mirror of https://github.com/gsi-upm/soil synced 2025-09-13 19:52:20 +00:00

Compare commits

...

89 Commits

Author SHA1 Message Date
J. Fernando Sánchez
880a9f2a1c black formatting 2022-10-17 20:23:57 +02:00
J. Fernando Sánchez
227fdf050e Fix conditionals 2022-10-17 19:29:39 +02:00
J. Fernando Sánchez
5d759d0072 Add conditional time values 2022-10-17 13:58:14 +02:00
J. Fernando Sánchez
77d08fc592 Agent step can be a generator 2022-10-17 08:58:51 +02:00
J. Fernando Sánchez
0efcd24d90 Improve exporters 2022-10-16 21:57:30 +02:00
J. Fernando Sánchez
78833a9e08 Formatted with black 2022-10-16 17:58:19 +02:00
J. Fernando Sánchez
d9947c2c52 WIP: all tests pass
Documentation needs some improvement

The API has been simplified to only allow for ONE topology per
NetworkEnvironment.
This covers the main use case, and simplifies the code.
2022-10-16 17:56:23 +02:00
J. Fernando Sánchez
cd62c23cb9 WIP: all tests pass 2022-10-13 22:43:16 +02:00
J. Fernando Sánchez
f811ee18c5 WIP 2022-10-06 15:49:19 +02:00
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
J. Fernando Sánchez
50cba751a6 Release 0.20.6 2022-07-05 12:08:34 +02:00
J. Fernando Sánchez
dfb6d13649 version 0.20.5 2022-05-18 16:13:53 +02:00
J. Fernando Sánchez
5559d37e57 version 0.20.4 2022-05-18 15:20:58 +02:00
J. Fernando Sánchez
2116fe6f38 Bug fixes and minor improvements 2022-05-12 16:14:47 +02:00
J. Fernando Sánchez
affeeb9643 Update examples 2022-04-04 16:47:58 +02:00
J. Fernando Sánchez
42ddc02318 CI: delay PyPI check 2022-03-07 14:35:07 +01:00
J. Fernando Sánchez
cab9a3440b Fix typo CI/CD 2022-03-07 13:57:25 +01:00
J. Fernando Sánchez
db505da49c Minor CI change 2022-03-07 13:35:02 +01:00
J. Fernando Sánchez
8eb8eb16eb Minor CI change 2022-03-07 12:51:22 +01:00
J. Fernando Sánchez
3fc5ca8c08 Fix requirements issue CI/CD 2022-03-07 12:46:01 +01:00
J. Fernando Sánchez
c02e6ea2e8 Fix die bug 2022-03-07 11:17:27 +01:00
J. Fernando Sánchez
38f8a8d110 Merge branch 'mesa'
First iteration to achieve MESA compatibility.
As a side effect, we have removed `simpy`.

For a full list of changes, see `CHANGELOG.md`.
2022-03-07 10:54:47 +01:00
J. Fernando Sánchez
cb72aac980 Add random activation example 2022-03-07 10:48:59 +01:00
J. Fernando Sánchez
6c4f44b4cb Partial MESA compatibility and several fixes
Documentation for the new APIs is still a work in progress :)
2021-10-15 20:16:49 +02:00
J. Fernando Sánchez
af9a392a93 WIP: mesa compat
All tests pass but some features are still missing/unclear:

- Mesa agents do not have a `state`, so their "metrics" don't get stored. I will
probably refactor this to remove some magic in this regard. This should get rid
of the `_state` dictionary and the setitem/getitem magic.
- Simulation is still different from a runner. So far only Agent and
Environment/Model have been updated.
2021-10-15 13:36:39 +02:00
J. Fernando Sánchez
5d7e57675a WIP: mesa compatibility 2021-10-14 17:37:06 +02:00
J. Fernando Sánchez
e860bdb922 v0.15.2
See CHANGELOG.md for a complete list of changes
2021-05-22 16:33:52 +02:00
J. Fernando Sánchez
d6b684c1c1 Fix docs requirements 2021-05-22 16:08:38 +02:00
J. Fernando Sánchez
05f7f49233 Refactoring v0.15.1
See CHANGELOG.md for a full list of changes

* Removed nxsim
* Refactored `agents.NetworkAgent` and `agents.BaseAgent`
* Refactored exporters
* Added stats to history
2020-11-19 23:58:47 +01:00
J. Fernando Sánchez
3b2c6a3db5 Seed before env initialization
Fixes #6
2020-07-27 12:29:24 +02:00
J. Fernando Sánchez
6118f917ee Fix Windows bug
Update URLs to gsi.upm.es
2020-07-07 10:57:10 +02:00
J. Fernando Sánchez
6adc8d36ba minor change in docs 2020-03-13 12:50:05 +01:00
J. Fernando Sánchez
c8b8149a17 Updated to 0.14.6
Fix compatibility issues with newer networkx and pandas versions
2020-03-11 16:17:14 +01:00
J. Fernando Sánchez
6690b6ee5f Fix incompatibility and bug in get_agents 2019-05-16 19:59:46 +02:00
J. Fernando Sánchez
97835b3d10 Clean up exporters 2019-05-03 13:17:27 +02:00
J. Fernando Sánchez
b0add8552e Tag version 0.14.0 2019-04-30 16:26:08 +02:00
J. Fernando Sánchez
1cf85ea450 Avoid writing gexf in test 2019-04-30 16:16:46 +02:00
J. Fernando Sánchez
c32e167fb8 Bump pyyaml to 5.1 2019-04-30 16:04:12 +02:00
J. Fernando Sánchez
5f68b5321d Pinning scipy to 1.2.1
1.3.0rc1 is not compatible with salib
2019-04-30 15:52:37 +02:00
J. Fernando Sánchez
2a2843bd19 Add tests exporters 2019-04-30 09:28:53 +02:00
J. Fernando Sánchez
d1006bd55c WIP: exporters 2019-04-29 18:47:15 +02:00
J. Fernando Sánchez
9bc036d185 WIP: exporters 2019-04-26 19:22:45 +02:00
J. Fernando Sánchez
a3ea434f23 0.13.8 2019-02-19 21:17:19 +01:00
J. Fernando Sánchez
65f6aa72f3 fix timeout in FSM. Improve logs 2019-02-01 19:05:07 +01:00
J. Fernando Sánchez
09e14c6e84 Add generator and programmatic examples 2018-12-20 19:25:33 +01:00
J. Fernando Sánchez
8593ac999d Swap test and build in CI. Remove tests in tags 2018-12-20 17:56:33 +01:00
J. Fernando Sánchez
90338c3549 skip-tls-verify in kaniko 2018-12-20 17:48:58 +01:00
J. Fernando Sánchez
1d532dacfe Remove entrypoint build stage 2018-12-20 15:14:58 +01:00
J. Fernando Sánchez
a1f8d8c9c5 Change entrypoint build stage 2018-12-20 15:07:45 +01:00
J. Fernando Sánchez
de326eb331 Remove CI global image 2018-12-20 15:05:45 +01:00
J. Fernando Sánchez
04b4380c61 Fix wrong import soil.web 2018-12-20 14:06:18 +01:00
J. Fernando Sánchez
d70a0c865c limit ci jobs to docker runners 2018-12-09 17:22:40 +01:00
J. Fernando Sánchez
625c28e4ee Fix CI syntax 2018-12-09 17:09:31 +01:00
J. Fernando Sánchez
9749f4ca14 Fix multithreading
Multithreading needs pickling to work.
Pickling/unpickling didn't work in some situations, like when the
environment_agents parameter was left blank.
This was due to two reasons:

1) agents and history didn't have a setstate method, and some of their
attributes cannot be pickled (generators, sqlite connection)
2) the environment was adding generators (agents) to its state.

This fixes the situation by restricting the keys that the environment exports
when it pickles, and by adding the set/getstate methods in agents.

The resulting pickles should contain enough information to inspect
them (history, state values, etc), but very limited.
2018-12-09 16:58:49 +01:00
J. Fernando Sánchez
3526fa29d7 Fix bug parallel 2018-12-09 14:06:50 +01:00
J. Fernando Sánchez
53604c1e66 Fix quickstart.rst markdown code 2018-12-09 13:10:00 +01:00
J. Fernando Sánchez
01cc8e9128 Merge branch 'refactor-imports'
* remove leftover import in example
* Update quickstart tutorial
* Add gitlab-ci
* Added missing gexf for tests
* Upgrade to python3.7 and pandas 0.3.4 because pandas has dropped support for
  python 3.4 -> There are some API changes in pandas, and I've updated the code
  accordingly.
* Set pytest as the default test runner
* Update dockerignore
* Skip testing long examples (>1000 steps)
2018-12-09 12:55:12 +01:00
J. Fernando Sánchez
a47ffa815b Fix CI. Skip testing long examples 2018-12-08 20:49:34 +01:00
J. Fernando Sánchez
b41927d7bf remove leftover import in example 2018-12-08 20:35:02 +01:00
J. Fernando Sánchez
70d033b3a9 Update dockerignore 2018-12-08 19:13:56 +01:00
J. Fernando Sánchez
3afed06656 Add gitlab-ci 2018-12-08 19:08:47 +01:00
J. Fernando Sánchez
0a7ef27844 Added missing gexf for tests 2018-12-08 18:53:12 +01:00
J. Fernando Sánchez
2e28b36f6e Python3.7, testing and bug fixes
* Upgrade to python3.7 and pandas 0.3.4 because pandas has dropped support for
python 3.4 -> There are some API changes in pandas, and I've updated the code
accordingly.
* Set pytest as the default test runner
2018-12-08 18:53:06 +01:00
J. Fernando Sánchez
bd4700567e Update quickstart tutorial 2018-12-08 18:17:25 +01:00
J. Fernando Sánchez
ff1df62eec All tests pass 2018-12-08 18:17:21 +01:00
J. Fernando Sánchez
9165979b49 merge visualization branch
The web server is included as a submodule.
The dependencies for the web (tornado) are not installed by default, but they
can be installed as an extra:

```
pip install soil[web]
```

Once installed, the soil web can be used like this:

```
soil-web

OR

python -m soil.web
```

There are other minor changes:

* History re-connects to the sqlite database if it is used from a different
thread.
* Environment accepts additional parameters (so it can run simulations with
`visualization_params` or any other in the future).
* The simulator class is no longer necessary
* Logging is done in the same thread, and the simulation is run in a separate
one. This had to be done because it was creating some problems with tornado not
being able to find the current thread during logs, which caused hundreds of
repeated lines in the web "console".
* The player is slightly modified in this version. I noticed that when the
  visualization was playing, if you clicked somewhere it would change for a
  second, and then go back to the previous place. The code for the playback
  seemed too complex, especially speed control, so I rewrote some parts. I
  might've introduced new bugs.
2018-12-07 18:28:19 +01:00
J. Fernando Sánchez
078f8ace9e Merge commit '8fec544772c13efb1dc8a0589240551b9bad27cb' as 'soil/web' 2018-12-07 18:27:57 +01:00
J. Fernando Sánchez
8fec544772 Squashed 'soil/web/' content from commit 4dcd0fc
git-subtree-dir: soil/web
git-subtree-split: 4dcd0fcb3d
2018-12-07 20:30:24 +01:00
J. Fernando Sánchez
5420501d36 Fix state and networkx dynamic attributes 2018-05-07 18:59:19 +02:00
J. Fernando Sánchez
5d89827ccf Fix history bug 2018-05-04 11:21:23 +02:00
J. Fernando Sánchez
fc48ed7e09 Added history class
Now the environment does not deal with history directly, it delegates it to a
specific class. The analysis also uses history instances instead of either
using the database directly or creating a proxy environment.

This should make it easier to change the implementation in the future.

In fact, the change was motivated by the large size of the csv files in previous
versions. This new implementation only stores results in deltas, and it fills
any necessary values when needed.
2018-05-04 10:01:49 +02:00
J. Fernando Sánchez
73c90887e8 Fix pip installation 2018-05-04 09:59:31 +02:00
J. Fernando Sánchez
497c8a55db Add workaround for geometric models
Closes soil/soil#4
2018-02-16 18:04:43 +01:00
J. Fernando Sánchez
7d1c800490 Parallelism and granular exporting options
* Graphs are not saved by default (not backwards compatible)
* Modified newsspread examples
* More granular options to save results (exporting to CSV and GEXF are now
optional)
* Updated tutorial to include exporting options
* Removed references from environment to simulation
* Added parallelism to simulations (can be turned off with a flag or argument).
2017-11-01 14:44:46 +01:00
J. Fernando Sánchez
a4b32afa2f Fix py3.4 and pypi bugs 2017-10-19 18:28:17 +02:00
J. Fernando Sánchez
a7c51742f6 Improved docs
Fixed several bugs
Added convenience methods in soil.analysis
2017-10-19 18:06:33 +02:00
J. Fernando Sánchez
78364d89d5 Fix gephi representation. Add sqlite 2017-10-17 19:48:56 +02:00
J. Fernando Sánchez
af76f54a28 Added rabbits 2017-10-16 19:23:52 +02:00
J. Fernando Sánchez
dbc182c6d0 Compatibility with py3.4 2017-10-09 14:44:21 +02:00
J. Fernando Sánchez
eafecc9e5e Make py3 compatibility explicit 2017-10-09 11:38:16 +02:00
J. Fernando Sánchez
e8988015e2 Add more options to the command line 2017-10-05 16:21:58 +02:00
J. Fernando Sánchez
ccc8e43416 Removed timeout from the simulation examples 2017-10-05 16:07:10 +02:00
J. Fernando Sánchez
347d295b09 Updated to match NetworkX's 2.0 API 2017-10-05 15:54:18 +02:00
163 changed files with 123502 additions and 5280 deletions

.dockerignore (new file, +5 lines)

@@ -0,0 +1,5 @@
**/soil_output
.*
**/__pycache__
__pycache__
*.pyc

.gitignore (modified, +1 line)

@@ -8,3 +8,4 @@ soil_output
docs/_build*
build/*
dist/*
prof

.gitlab-ci.yml (new file, +53 lines)

@@ -0,0 +1,53 @@
stages:
  - test
  - publish
  - check_published

docker:
  stage: publish
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - docker
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # The skip-tls-verify flag is there because our registry certificate is self signed
    - /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

test:
  tags:
    - docker
  image: python:3.7
  stage: test
  script:
    - pip install -r requirements.txt -r test-requirements.txt
    - python setup.py test

push_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.7
  stage: publish
  script:
    - echo $CI_COMMIT_TAG > soil/VERSION
    - pip install twine
    - python setup.py sdist bdist_wheel
    - TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*

check_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.7
  stage: check_published
  script:
    - pip install soil==$CI_COMMIT_TAG
  # Allow PYPI to update its index before we try to install
  when: delayed
  start_in: 2 minutes

CHANGELOG.md (new file, +180 lines)

@@ -0,0 +1,180 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.3 UNRELEASED]
### Added
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similar to normal states, with one caveat. Only `time` values can be yielded, not a state. This is because the state will not change, it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
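  A minimal sketch of such a generator state (the agent and state names here are illustrative, and the `timeout`/`die` calls are assumptions based on the bundled custom_timeouts example):

```python
from soil.agents import FSM, state, default_state


class Patient(FSM):
    @default_state
    @state
    def sick(self):
        self.log("Got sick at {}".format(self.now))
        # Only time values may be yielded; execution resumes here afterwards.
        yield self.env.timeout(3)
        self.log("Still sick at {}".format(self.now))
        # The return value may be a state, or a (state, time) tuple.
        return self.recovered

    @state
    def recovered(self):
        self.die()  # assumption: agents expose a die() helper
```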
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Ability
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
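  For illustration, a rough sketch of the datacollector-based alternative (this assumes the standard Mesa `DataCollector` API and the `model.datacollector` attribute mentioned above; the class and reporter names are made up):

```python
from mesa import DataCollector
from soil import Environment


class RecordingEnv(Environment):
    """Hypothetical environment that records state on every step."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.datacollector = DataCollector(
            model_reporters={"time": lambda m: m.schedule.time},
            agent_reporters={"state_id": lambda a: getattr(a, "state_id", None)},
        )

    def step(self):
        super().step()
        self.datacollector.collect(self)  # standard Mesa collection call
```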
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` no longer re-exports the built-in `time` module (the change is in `soil.simulation`). It used to create subtle import conflicts when importing `soil.time`.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent __init__, not in the environment. This decouples both classes a bit more, and it is more intuitive
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
## [0.20.2]
### Fixed
* CI/CD testing issues
## [0.20.1]
### Fixed
* Agents would run another step after dying.
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id` to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default
## [0.15.1]
### Added
* read-only `History`
### Fixed
* Serialization problem with the `Environment` on parallel mode.
* Analysis functions now work as they should in the tutorial
## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open
## [0.14.9]
### Changed
* Seed random before environment initialization
## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5
## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception
## [0.14.6]
### Fixed
* Bug with newer versions of networkx (2.4) where the Graph.node attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit yaml.SafeLoader to avoid deprecation warnings when using yaml.load. It should not break any existing setups, but we could move to the FullLoader in the future if needed.
## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.
## [0.14.3]
### Fixed
* Incompatibility with py3.3-3.6 due to ModuleNotFoundError and TypeError in DryRunner
## [0.14.2]
### Fixed
* Output path for exporters is now soil_output
### Changed
* CSV output to stdout in dry_run mode
## [0.14.1]
### Changed
* Exporter names in lower case
* Add default exporter in runs
## [0.14.0]
### Added
* Loading configuration from template definitions in the yaml, in preparation for SALib support.
The definition of the variables and their possible values (i.e., a problem in SALib terms), as well as a sampler function, can be provided.
Soil uses this definition and the template to generate a set of configurations.
* Simulation group names, to link related simulations. For now, they are only used to group all simulations in the same group under the same folder.
* Exporters unify exporting/dumping results and other files to disk. If `dry_run` is set to `True`, exporters will write to stdout instead of a file (useful for testing/debugging).
* Distribution exporter, to write statistics about values and value_counts in every simulation. The results are dumped to two CSV files.
### Changed
* `dir_path` is now the directory for resources (modules, files)
* Environments and simulations do not export or write anything by default. That task is delegated to Exporters
### Removed
* The output dir for environments and simulations (see Exporters)
* DrawingAgent, because it wrote to disk and was not being used. We provide a partial alternative in the form of the GraphDrawing exporter. A complete alternative will be provided once the network at each state can be accessed by exporters.
### Fixed
* Modules with custom agents/environments failed to load when they were run from outside the directory of the definition file. Modules are now loaded from the directory of the simulation file in addition to the working directory
* Memory databases (in history) can now be shared between threads.
* Testing all examples, not just subdirectories
## [0.13.8]
### Changed
* Moved TerroristNetworkModel to examples
### Added
* `get_agents` and `count_agents` methods now accept lists as inputs. They can be used to retrieve agents from node ids
* `subgraph` in BaseAgent
* `agents.select` method, to filter out agents
* `skip_test` property in yaml definitions, to force skipping some examples
* `agents.Geo`, with a search function based on position
* `BaseAgent.ego_search` to get nodes from the ego network of a node
* `BaseAgent.degree` and `BaseAgent.betweenness`
### Fixed
## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load the history for examination, at the expense of potentially overwriting existing data. That should not happen, because History is only created in the Environment, and that has `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state. e.g. `return self.idle, self.env.timeout(2)` will execute the *idle* state in 2 *units of time* (`t_step=now+2`).
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.

Dockerfile (new file, +12 lines)

@@ -0,0 +1,12 @@
FROM python:3.7
WORKDIR /usr/src/app
COPY test-requirements.txt requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r test-requirements.txt -r requirements.txt
COPY ./ /usr/src/app
RUN pip install '.[web]'
ENTRYPOINT ["python", "-m", "soil"]

MANIFEST.in (modified)

@@ -1,4 +1,7 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil
graft soil
global-exclude __pycache__
global-exclude soil_output
global-exclude *.py[co]

Makefile (new file, +7 lines)

@@ -0,0 +1,7 @@
quick-test:
docker-compose exec dev python -m pytest -s -v
test:
docker run -t -v $$PWD:/usr/src/app -w /usr/src/app python:3.7 python setup.py test
.PHONY: test

README.md (modified)

@@ -3,7 +3,46 @@
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](notebooks/soil_tutorial.ipynb) to develop your own agent models.
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
# Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation
If you use Soil in your research, don't forget to cite this paper:
@@ -28,7 +67,6 @@ If you use Soil in your research, don't forget to cite this paper:
```
@Copyright GSI - Universidad Politécnica de Madrid 2017
[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

docker-compose.yml (new file, +12 lines)

@@ -0,0 +1,12 @@
version: '3'
services:
  dev:
    build: .
    environment:
      PYTHONDONTWRITEBYTECODE: 1
    volumes:
      - .:/usr/src/app
    tty: true
    entrypoint: /bin/bash
    ports:
      - '8001:8001'

(file name not shown) File diff suppressed because it is too large

docs/conf.py (modified)

@@ -31,7 +31,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
extensions = ['IPython.sphinxext.ipython_console_highlighting']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -69,7 +69,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

docs/configuration.rst (new file, +262 lines)

@@ -0,0 +1,262 @@
Configuring a simulation
------------------------
There are two ways to configure a simulation: programmatically and with a configuration file.
In both cases, the parameters used are the same.
The advantage of a configuration file is that it is a clean, declarative description, and it makes the simulation easier to reproduce.
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. literalinclude:: example.yml
:language: yaml
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which defines the transition probabilities of the model, such as ``prob_infect``.
The state of the agents will be updated every 2 seconds (``interval``).
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
.. code::
soil_output
└── MyExampleSimulation
├── MyExampleSimulation.dumped.yml
├── MyExampleSimulation.simulation.pickle
├── MyExampleSimulation_trial_0.db.sqlite
├── MyExampleSimulation_trial_1.db.sqlite
└── MyExampleSimulation_trial_2.db.sqlite
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
.. code:: yaml
---
topology:
nodes:
- id: First
- id: Second
links:
- source: First
target: Second
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
.. code:: yaml
network_params:
generator: complete_graph
n: 100
Environment
============
The environment is the place where the shared state of the simulation is stored.
This includes global parameters, such as the probability of a disease outbreak, as well as other data, such as a map or a network topology that connects multiple agents.
As a result, it is also common to add custom methods to an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
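For illustration, a custom environment along those lines could look like this. This is a hedged sketch: the ``LotteryEnvironment`` name, the ``prob_winning`` parameter and the dictionary-style access are illustrative assumptions, not a documented API.

.. code:: python

    from soil import Environment

    class LotteryEnvironment(Environment):
        def play_lottery(self, agent):
            # The environment, not the agent, decides whether the agent wins,
            # based on a value defined in environment_params.
            return self.random.random() < self['prob_winning']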
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_class``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
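As a minimal, hypothetical example of an explicit ``step`` method (the agent name and the logged message are illustrative; ``log``, ``now`` and ``unique_id`` appear in the bundled examples):

.. code:: python

    from soil.agents import BaseAgent

    class NoisyAgent(BaseAgent):
        def step(self):
            # Executed once per activation; scheduling is handled by the environment.
            self.log(f"Agent {self.unique_id} activated at t={self.now}")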
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated to an agent of that type.
.. code:: yaml
agent_class: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type using the ``weight`` property.
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 1
- agent_class: CounterModel
weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
.. code:: yaml
topology:
nodes:
- id: first
agent_class: BaseAgent
states:
first:
agent_class: SISaModel
This would also work with a randomly generated network:
.. code:: yaml
network:
generator: complete
n: 5
agent_class: BaseAgent
states:
- agent_class: SISaModel
In addition to agent type, you may add a custom initial state to the distribution.
This is very useful to add the same agent type with different states.
e.g., to populate the network with SISaModel, roughly 10% of them with a discontent state:
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 9
state:
id: neutral
- agent_class: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_class: SISaModel
network:
generator: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_class: MyAgent
state:
mood: happy
- agent_class: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.
Templating
==========
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters in the simulation.
For instance, you may want to run a simulation with different agent distributions.
This can be done in Soil using **templates**.
A template is a configuration where some of the values are specified with a variable.
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
There are two types of variables, depending on how their values are decided:
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
Here is an example with a single fixed variable and two bounded variables:
.. literalinclude:: ../examples/template.yml
:language: yaml

docs/example.yml (new file, +35 lines)

@@ -0,0 +1,35 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
  generator: barabasi_albert_graph
  n: 100
  m: 2
network_agents:
  - agent_class: SISaModel
    weight: 1
    state:
      id: content
  - agent_class: SISaModel
    weight: 1
    state:
      id: discontent
  - agent_class: SISaModel
    weight: 8
    state:
      id: neutral
environment_params:
  prob_infect: 0.075
  neutral_discontent_spon_prob: 0.1
  neutral_discontent_infected_prob: 0.3
  neutral_content_spon_prob: 0.3
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.5
  discontent_content: 0.5
  variance_d_c: 0.2
  content_discontent: 0.2
  variance_c_d: 0.2
  content_neutral: 0.2
  standard_variance: 1

docs/index.rst (modified)

@@ -6,7 +6,7 @@
Welcome to Soil's documentation!
================================
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
If you use Soil in your research, do not forget to cite this paper:
@@ -34,13 +34,15 @@ If you use Soil in your research, do not forget to cite this paper:
.. toctree::
:maxdepth: 2
:maxdepth: 0
:caption: Learn more about soil:
installation
quickstart
Tutorial - Spreading news
configuration
Tutorial <soil_tutorial>
..
.. Indices and tables

docs/installation.rst (modified)

@@ -1,7 +1,7 @@
Installation
------------
The easiest way to install Soil is through pip:
The easiest way to install Soil is through pip, with Python >= 3.4:
.. code:: bash
@@ -14,11 +14,11 @@ Now test that it worked by running the command line tool
soil --help
Or using soil programmatically:
Or, if you're using soil programmatically:
.. code:: python
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.

[Binary image diffs]
One unnamed image was modified (8.5 KiB before, 7.0 KiB after) and ten unnamed images were removed (11 to 17 KiB each).
31 new images were added under docs/ (5.3 to 19 KiB each): output_54_0.png, output_54_1.png, output_55_0.png through output_55_9.png, output_56_0.png through output_56_9.png, output_61_0.png, output_63_1.png, output_66_1.png, output_67_1.png, output_72_0.png, output_72_1.png, output_74_1.png, output_75_1.png and output_76_1.png.
docs/quickstart.rst (modified)

@@ -1,194 +1,93 @@
Quickstart
----------
This section shows how to run simulations from simulation configuration files.
First of all, you need to install the package (See :doc:`installation`)
This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.
Simulation configuration files are ``json`` or ``yaml`` files that define all the parameters of a simulation.
Here's an example (``example.yml``).
.. code:: yaml
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
timeout: 2
network_params:
network_type: barabasi_albert_graph
n: 100
m: 2
agent_distribution:
- agent_type: SISaModel
weight: 1
state:
id: content
- agent_type: SISaModel
weight: 1
state:
id: discontent
- agent_type: SISaModel
weight: 8
state:
id: neutral
environment_params:
prob_infect: 0.075
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Four types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (to re-launch it); and, for every trial, a csv file with the state of every network node and the environment parameters at every step of the simulation, as well as the network in gephi format (``gexf``).
There are mainly two parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
.. code::
soil_output
├── Sim_prob_0
│   ├── Sim_prob_0.dumped.yml
│   ├── Sim_prob_0.simulation.pickle
│   ├── Sim_prob_0_trial_0.environment.csv
│   └── Sim_prob_0_trial_0.gexf
.. image:: soil.png
:width: 80%
:align: center
This example configuration will run three trials of a simulation containing a randomly generated network.
The 100 nodes in the network will be SISaModel agents, 10% of them will start in the content state, 10% in the discontent state, and the remaining 80% in the neutral state.
All agents will have access to the environment, which only contains one variable, ``prob_infected``.
The state of the agents will be updated every 2 seconds (``timeout``).
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agents classes, see :doc:`soil_tutorial`.
Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(100)`:
.. code:: yaml
network_params:
network_type: complete_graph
n: 100
Environment
============
The environment is the place where the shared state of the simulation is stored.
For instance, the probability of certain events.
The configuration file may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: an agent type (``agent_type``) and its state.
Only one agent is executed at a time (generally, every ``timeout`` seconds), and it has access to its state and the environment parameters.
Through the environment, it can access the network topology and the state of other agents.
There are three types of agents according to how they are added to the simulation: network agents, environment agents, and other agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will have an associated agent of that type.
.. code:: yaml
agent_type: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (``weight``).
For instance, with following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
agent_distribution:
- agent_type: SISaModel
weight: 1
- agent_type: CounterModel
weight: 5
In addition to agent type, you may also add a custom initial state to the distribution.
This is very useful to add the same agent type with different states.
e.g., to populate the network with SISaModel, roughly 10% of them with a discontent state:
.. code:: yaml
agent_distribution:
- agent_type: SISaModel
weight: 9
state:
id: neutral
- agent_type: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_type: SISaModel
network:
network_type: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
.. literalinclude:: quickstart.yml
:language: yaml
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities of changing from one state to another, either spontaneously or because of contagion from neighboring agents.
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
Running the simulation
======================
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
.. code::
environment_agents:
- agent_type: MyAgent
state:
mood: happy
- agent_type: DummyAgent
soil --graph --csv quickstart.yml [13:35:29]
INFO:soil:Using config(s): quickstart
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
INFO:soil:Starting simulation quickstart at 13:35:30.
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
INFO:soil:Dumping results to soil_output/quickstart
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
Visualizing the results
=======================
The ``CSV`` file should look like this:
The simulation will return a dynamic graph .gexf file which could be visualized with
.. code::
agent_id,t_step,key,value
env,0,neutral_discontent_spon_prob,0.05
env,0,neutral_discontent_infected_prob,0.1
env,0,neutral_content_spon_prob,0.2
env,0,neutral_content_infected_prob,0.4
env,0,discontent_neutral,0.2
env,0,discontent_content,0.05
env,0,content_discontent,0.05
env,0,variance_d_c,0.05
env,0,variance_c_d,0.1
Results and visualization
=========================
The environment variables are marked with ``env`` as their ``agent_id``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
The dynamic graph is exported as a .gexf file which could be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web

docs/quickstart.yml (new file, +30 lines)

@@ -0,0 +1,30 @@
---
name: quickstart
num_trials: 1
max_time: 1000
network_agents:
  - agent_class: SISaModel
    state:
      id: neutral
    weight: 1
  - agent_class: SISaModel
    state:
      id: content
    weight: 2
network_params:
  n: 100
  k: 5
  p: 0.2
  generator: newman_watts_strogatz_graph
environment_params:
  neutral_discontent_spon_prob: 0.05
  neutral_discontent_infected_prob: 0.1
  neutral_content_spon_prob: 0.2
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.2
  discontent_content: 0.05
  content_discontent: 0.05
  variance_d_c: 0.05
  variance_c_d: 0.1
  content_neutral: 0.1
  standard_variance: 0.1

docs/requirements.txt (new file, +1 line)

@@ -0,0 +1 @@
ipython>=7.31.1

docs/soil-vs.rst (new file, +12 lines)

@@ -0,0 +1,12 @@
### MESA
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
Here are some reasons to use Soil instead of plain mesa:
- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed); see the sketch below
- Reporting functions that aggregate multiple
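
For instance, running several trials of one experiment could look like this (a sketch based on the `Simulation` usage in the bundled examples; the exact parameters shown are illustrative):

```python
from soil import Simulation
from soil.agents import CounterModel

s = Simulation(
    name="counter-demo",
    network_params={"generator": "complete_graph", "n": 10},
    network_agents=[{"agent_class": CounterModel, "weight": 1}],
    num_trials=3,    # three trials, each with a different random seed
    max_steps=50,
)
s.run(dry_run=True)  # dry_run: do not write results to disk
```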

docs/soil.png (new binary file, 43 KiB)

docs/soil_tutorial.rst (new file, +2606 lines; diff suppressed because it is too large)

examples/NewsSpread.ipynb (new file, +532 lines; diff suppressed because one or more lines are too long)

examples/Untitled.ipynb (new file, +80808 lines; diff suppressed because it is too large)

(file name not shown; modified)

@@ -1,24 +1,54 @@
---
version: '2'
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
max_steps: 100
interval: 1
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
state:
id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_params:
seed: "CompleteSeed!"
model_class: Environment
model_params:
am_i_complete: true
default_state:
incidents: 0
states:
- name: 'The first node'
- name: 'The second node'
topology:
params:
generator: complete_graph
n: 12
environment:
agents:
agent_class: CounterModel
topology: true
state:
times: 1
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: false
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0
group: fixed_counters
state:
times: 1
total: 0
- agent_class: CounterModel
group: fixed_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: distro_counters
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5

(file name not shown; deleted)

@@ -1,17 +0,0 @@
default_state: {}
environment_agents: []
environment_params: {prob_neighbor_spread: 0.0, prob_tv_spread: 0.01}
interval: 1
max_time: 20
name: Sim_prob_0
network_agents:
- agent_type: NewsSpread
state: {has_tv: false}
weight: 1
- agent_type: NewsSpread
state: {has_tv: true}
weight: 2
network_params: {generator: erdos_renyi_graph, n: 500, p: 0.1}
num_trials: 1
states:
- {has_tv: true}

(file name not shown; deleted)

@@ -1,20 +0,0 @@
import soil
import random


class NewsSpread(soil.agents.FSM):
    @soil.agents.default_state
    @soil.agents.state
    def neutral(self):
        r = random.random()
        if self['has_tv'] and r < self.env['prob_tv_spread']:
            return self.infected
        return

    @soil.agents.state
    def infected(self):
        prob_infect = self.env['prob_neighbor_spread']
        for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
            r = random.random()
            if r < prob_infect:
                neighbor.state['id'] = self.infected.id
        return

(file name not shown; new file)

@@ -0,0 +1,16 @@
---
name: custom-generator
description: Using a custom generator for the network
num_trials: 3
max_steps: 100
interval: 1
network_params:
generator: mymodule.mygenerator
# These are custom parameters
n: 10
n_edges: 5
network_agents:
- agent_class: CounterModel
weight: 1
state:
state_id: 0

(file name not shown; new file)

@@ -0,0 +1,22 @@
from networkx import Graph
import random
import networkx as nx
def mygenerator(n=5, n_edges=5):
"""
Just a simple generator that creates a network with n nodes and
n_edges edges. Edges are assigned randomly, only avoiding self loops.
"""
G = nx.Graph()
for i in range(n):
G.add_node(i)
for i in range(n_edges):
nodes = list(G.nodes)
n_in = random.choice(nodes)
nodes.remove(n_in) # Avoid loops
n_out = random.choice(nodes)
G.add_edge(n_in, n_out)
return G

@@ -0,0 +1,38 @@
from soil.agents import FSM, state, default_state
class Fibonacci(FSM):
"""Agent that only executes in t_steps that are Fibonacci numbers"""
defaults = {"prev": 1}
@default_state
@state
def counting(self):
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev)
class Odds(FSM):
"""Agent that only executes in odd t_steps"""
@default_state
@state
def odds(self):
self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1 + self.now % 2)
if __name__ == "__main__":
from soil import Simulation
s = Simulation(
network_agents=[
{"ids": [0], "agent_class": Fibonacci},
{"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)

examples/mesa/mesa.yml

@@ -0,0 +1,19 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_steps: 100
interval: 1
seed: '1'
model_class: social_wealth.MoneyEnv
model_params:
generator: social_wealth.graph_generator
agents:
topology: true
distribution:
- agent_class: social_wealth.SocialMoneyAgent
weight: 1
N: 10
width: 50
height: 50

examples/mesa/server.py

@@ -0,0 +1,114 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
wealths = {
node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
}
portrayal["nodes"] = [
{
"id": node_id,
"size": 2 * (wealth + 1),
"color": "#CC0000" if wealth == 0 else "#007959",
# "color": "#CC0000",
"label": f"{node_id}: {wealth}",
}
for (node_id, wealth) in wealths.items()
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth * 10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})",
}
grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"height": UserSettableParameter(
"slider",
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width",
5,
5,
10,
1,
description="Grid width",
),
"agent_class": UserSettableParameter(
"choice",
"Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
),
"generator": graph_generator,
}
canvas_element = CanvasGrid(
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
)
server.port = 8521
server.launch(open_browser=False)

@@ -0,0 +1,137 @@
"""
This is an example that adds soil agents and environment in a normal
mesa workflow.
"""
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment, serialization
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
print("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents())
self.info("Trying to give money")
self.info("Cellmates: ", cellmates)
self.info("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(
self,
width,
height,
N,
generator=graph_generator,
agent_class=SocialMoneyAgent,
topology=None,
**kwargs
):
generator = serialization.deserialize(generator)
agent_class = serialization.deserialize(agent_class, globs=globals())
topology = generator(n=N)
super().__init__(topology=topology, N=N, **kwargs)
self.grid = MultiGrid(width, height, False)
self.populate_network(agent_class=agent_class)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
if __name__ == "__main__":
fixed_params = {
"generator": nx.complete_graph,
"width": 10,
"network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
"height": 10,
}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(
MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

examples/mesa/wealth.py

@@ -0,0 +1,87 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(Agent):
"""An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == "__main__":
fixed_params = {"width": 10, "height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(
MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

@@ -0,0 +1,133 @@
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_dumb
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_half_herd
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_wise_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_all_wise
network_agents:
- agent_class: newsspread.WiseViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50

@@ -0,0 +1,85 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging
class DumbViewer(FSM, NetworkAgent):
"""
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
"""
defaults = {
"prob_neighbor_spread": 0.5,
"prob_tv_spread": 0.1,
}
@default_state
@state
def neutral(self):
if self["has_tv"]:
if self.prob(self.model["prob_tv_spread"]):
return self.infected
@state
def infected(self):
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
if self.prob(self.model["prob_neighbor_spread"]):
neighbor.infect()
def infect(self):
"""
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.set_state(self.infected)
class HerdViewer(DumbViewer):
"""
A viewer whose probability of infection depends on the state of its neighbors.
"""
def infect(self):
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
infected = self.count_neighboring_agents(state_id=self.infected.id)
total = self.count_neighboring_agents()
prob_infect = self.model["prob_neighbor_spread"] * infected / total
self.debug("prob_infect", prob_infect)
if self.prob(prob_infect):
self.set_state(self.infected)
class WiseViewer(HerdViewer):
"""
A viewer that can change its mind.
"""
defaults = {
"prob_neighbor_spread": 0.5,
"prob_neighbor_cure": 0.25,
"prob_tv_spread": 0.1,
}
@state
def cured(self):
prob_cure = self.model["prob_neighbor_cure"]
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
if self.prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self):
self.set_state(self.cured.id)
@state
def infected(self):
cured = max(self.count_neighboring_agents(self.cured.id), 1.0)
infected = max(self.count_neighboring_agents(self.infected.id), 1.0)
prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
if self.prob(prob_cure):
return self.cured
return self.set_state(super().infected)

examples/programmatic/.gitignore

@@ -0,0 +1 @@
Programmatic*

@@ -0,0 +1,41 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
return G
class MyAgent(agents.FSM):
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if agents.prob(0.2):
self.info("This runs 2/10 times on average")
s = Simulation(
name="Programmatic",
network_params={"generator": mygenerator},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()
# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')

@@ -0,0 +1,10 @@
Simulation of pubs and drinking pals that go from pub to pub.
The custom environment includes a list of pubs and methods that allow agents to discover and enter pubs (a minimal run sketch follows the list below).
There are two types of agents:
* Patron. A patron will do three things, in this order:
  * Look for other patrons to drink with
  * Look for a pub where the agent and other agents in the same group can get in.
  * While in the pub, patrons only drink, until they get drunk and are taken home.
* Police. There is only one police agent, which will take any drunk patrons home (kick them out of the pub).
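The whole scenario can be run from its YAML definition. The snippet below mirrors the `__main__` block of `pubcrawl.py` further down in this changeset:

```
from soil import simulation

# Run the pub-crawl scenario defined in pubcrawl.yml, without dumping results to disk
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
```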

@@ -0,0 +1,175 @@
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment
from itertools import islice
import logging
class CityPubs(Environment):
"""Environment with Pubs"""
level = logging.INFO
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
super(CityPubs, self).__init__(*args, **kwargs)
pubs = {}
for i in range(number_of_pubs):
newpub = {
"name": "The awesome pub #{}".format(i),
"open": True,
"capacity": pub_capacity,
"occupancy": 0,
}
pubs[newpub["name"]] = newpub
self["pubs"] = pubs
def enter(self, pub_id, *nodes):
"""Agents will try to enter. The pub checks if it is possible"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
return False
pub["occupancy"] += len(nodes)
for node in nodes:
node["pub"] = pub_id
return True
def available_pubs(self):
for pub in self["pubs"].values():
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
yield pub["name"]
def exit(self, pub_id, *node_ids):
"""Agents will notify the pub they want to leave"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
for node_id in node_ids:
node = self.get_agent(node_id)
if pub_id == node["pub"]:
del node["pub"]
pub["occupancy"] -= 1
class Patron(FSM, NetworkAgent):
"""Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home.
"""
level = logging.DEBUG
pub = None
drunk = False
pints = 0
max_pints = 3
kicked_out = False
@default_state
@state
def looking_for_friends(self):
"""Look for friends to drink with"""
self.info("I am looking for friends")
available_friends = list(
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
)
if not available_friends:
self.info("Life sucks and I'm alone!")
return self.at_home
befriended = self.try_friends(available_friends)
if befriended:
return self.looking_for_pub
@state
def looking_for_pub(self):
"""Look for a pub that accepts me and my friends"""
if self["pub"] != None:
return self.sober_in_pub
self.debug("I am looking for a pub")
group = list(self.get_neighboring_agents())
for pub in self.model.available_pubs():
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
if self.model.enter(pub, self, *group):
self.info("We're all {} getting in {}!".format(len(group), pub))
return self.sober_in_pub
@state
def sober_in_pub(self):
"""Drink up."""
self.drink()
if self["pints"] > self["max_pints"]:
return self.drunk_in_pub
@state
def drunk_in_pub(self):
"""I'm out. Take me home!"""
self.info("I'm so drunk. Take me home!")
self["drunk"] = True
if self.kicked_out:
return self.at_home
pass # passed out drunk
@state
def at_home(self):
"""The end"""
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
self.debug("I'm home. Just like {} of my friends".format(len(others)))
def drink(self):
self["pints"] += 1
self.debug("Cheers to that")
def kick_out(self):
self.kicked_out = True
def befriend(self, other_agent, force=False):
"""
Try to become friends with another agent. The chances of
success depend on both agents' openness.
"""
if force or self["openness"] > self.random.random():
self.add_edge(self, other_agent)
self.info("Made some friend {}".format(other_agent))
return True
return False
def try_friends(self, others):
"""Look for random agents around me and try to befriend them"""
befriended = False
k = int(10 * self["openness"])
self.random.shuffle(others)
for friend in islice(others, k): # random.choice >= 3.7
if friend == self:
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug("Hooray! new friend: {}".format(friend.id))
befriended = True
else:
self.debug("{} does not want to be friends".format(friend.id))
return befriended
class Police(FSM):
"""Simple agent to take drunk people out of pubs."""
level = logging.INFO
@default_state
@state
def patrol(self):
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
for drunk in drunksters:
self.info("Kicking out the trash: {}".format(drunk.id))
drunk.kick_out()
else:
self.info("No trash to take out. Too bad.")
if __name__ == "__main__":
from soil import simulation
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)

@@ -0,0 +1,26 @@
---
name: pubcrawl
num_trials: 3
max_steps: 10
dump: false
network_params:
# Generate 30 empty nodes. They will be assigned a network agent
generator: empty_graph
n: 30
network_agents:
- agent_class: pubcrawl.Patron
description: Extroverted patron
state:
openness: 1.0
weight: 9
- agent_class: pubcrawl.Patron
description: Introverted patron
state:
openness: 0.1
weight: 1
environment_agents:
- agent_class: pubcrawl.Police
environment_class: pubcrawl.CityPubs
environment_params:
altercations: 0
number_of_pubs: 3

@@ -0,0 +1,14 @@
There are two similar implementations of this simulation:
- `basic`. Using simple primitives.
- `improved`. Using more advanced features, such as the `time` module to avoid unnecessary computations (i.e., skipping steps) and generator functions.
The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:
```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```
To learn more about how this functionality works, check out the `soil.easy` function.
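For reference, the same entry point can also be used programmatically; the snippet below mirrors the `__main__` block of `rabbit_agents.py` in this changeset:

```
from soil import easy

# Load the simulation described in rabbits.yml and run it
with easy("rabbits.yml") as sim:
    sim.run()
```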

@@ -0,0 +1,150 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(NetworkAgent, FSM):
sexual_maturity = 30
life_expectancy = 300
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.age = 0
self.offspring = 0
return self.youngling
@state
def youngling(self):
self.age += 1
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
self.age += 1
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Take a break
class Female(Rabbit):
gestation = 10
pregnancy = -1
@state
def fertile(self):
# Just wait for a Male
self.age += 1
if self.age > self.life_expectancy:
return self.dead
if self.pregnancy >= 0:
return self.pregnant
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.pregnancy = 0
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.info("I am pregnant")
self.age += 1
if self.age >= self.life_expectancy:
return self.die()
if self.pregnancy < self.gestation:
self.pregnancy += 1
return
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
try:
child.add_edge(self.mate)
self.model.agents[self.mate].offspring += 1
except ValueError:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
self.pregnancy = -1
return self.fertile
def die(self):
if "pregnancy" in self and self["pregnancy"] > -1:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()

@@ -0,0 +1,42 @@
---
version: '2'
name: rabbits_basic
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

@@ -0,0 +1,157 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(FSM, NetworkAgent):
sexual_maturity = 30
life_expectancy = 300
birth = None
@property
def age(self):
if self.birth is None:
return None
return self.now - self.birth
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.birth = self.now
self.offspring = 0
return self.youngling, Delta(self.sexual_maturity - self.age)
@state
def youngling(self):
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Do not try to impregnate other females
class Female(Rabbit):
gestation = 10
conception = None
@state
def fertile(self):
# Just wait for a Male
if self.age > self.life_expectancy:
return self.dead
if self.conception is not None:
return self.pregnant
@property
def pregnancy(self):
if self.conception is None:
return None
return self.now - self.conception
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.conception = self.now
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.debug("I am pregnant")
if self.age > self.life_expectancy:
self.info("Dying before giving birth")
return self.die()
if self.pregnancy >= self.gestation:
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
if self.mate:
child.add_edge(self.mate)
self.mate.offspring += 1
else:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
return self.fertile
def die(self):
if self.pregnancy is not None:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()

@@ -0,0 +1,42 @@
---
version: '2'
name: rabbits_improved
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

@@ -0,0 +1,43 @@
"""
Example of setting a custom delay (time delta) between state executions.
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from soil.time import Delta
class MyAgent(agents.FSM):
"""
An agent that first does a ping
"""
defaults = {"pong_counts": 2}
@agents.default_state
@agents.state
def ping(self):
self.info("Ping")
return self.pong, Delta(self.random.expovariate(1 / 16))
@agents.state
def pong(self):
self.info("Pong")
self.pong_counts -= 1
self.info(str(self.pong_counts))
if self.pong_counts < 1:
return self.die()
return None, Delta(self.random.expovariate(1 / 16))
s = Simulation(
name="Programmatic",
network_agents=[{"agent_class": MyAgent, "id": 0}],
topology={"nodes": [{"id": 0}], "links": []},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)
envs = s.run()

examples/template.yml

@@ -0,0 +1,30 @@
---
sampler:
method: "SALib.sample.morris.sample"
N: 10
template:
group: simple
num_trials: 1
interval: 1
max_steps: 2
seed: "CompleteSeed!"
dump: false
model_params:
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_class: CounterModel
weight: "{{ x1 }}"
state:
state_id: 0
- agent_class: AggregatedCounter
weight: "{{ 1 - x1 }}"
name: "{{ x3 }}"
skip_test: true
vars:
bounds:
x1: [0, 1]
x2: [1, 2]
fixed:
x3: ["a", "b", "c"]

@@ -0,0 +1,291 @@
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, state, default_state
from soil import Environment
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
information_spread_intensity
terrorist_additional_influence
min_vulnerability (optional else zero)
max_vulnerability
prob_interaction
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.information_spread_intensity = model.environment_params[
"information_spread_intensity"
]
self.terrorist_additional_influence = model.environment_params[
"terrorist_additional_influence"
]
self.prob_interaction = model.environment_params["prob_interaction"]
if self["id"] == self.civilian.id: # Civilian
self.mean_belief = self.random.uniform(0.00, 0.5)
elif self["id"] == self.terrorist.id: # Terrorist
self.mean_belief = self.random.uniform(0.8, 1.00)
elif self["id"] == self.leader.id: # Leader
self.mean_belief = 1.00
else:
raise Exception("Invalid state id: {}".format(self["id"]))
if "min_vulnerability" in model.environment_params:
self.vulnerability = self.random.uniform(
model.environment_params["min_vulnerability"],
model.environment_params["max_vulnerability"],
)
else:
self.vulnerability = self.random.uniform(
0, model.environment_params["max_vulnerability"]
)
@state
def civilian(self):
neighbours = list(self.get_neighboring_agents(agent_class=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(
n for n in neighbours if self.random.random() <= self.prob_interaction
)
influence = sum(self.degree(i) for i in interactions)
mean_belief = sum(
i.mean_belief * self.degree(i) / influence for i in interactions
)
mean_belief = (
mean_belief * self.information_spread_intensity
+ self.mean_belief * (1 - self.information_spread_intensity)
)
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
if self.mean_belief >= 0.8:
return self.terrorist
@state
def leader(self):
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
for neighbour in self.get_neighboring_agents(
state_id=[self.terrorist.id, self.leader.id]
):
if self.betweenness(neighbour) > self.betweenness(self):
return self.terrorist
@state
def terrorist(self):
neighbours = self.get_agents(
state_id=[self.terrorist.id, self.leader.id],
agent_class=TerroristSpreadModel,
limit_neighbors=True,
)
if len(neighbours) > 0:
influence = sum(self.degree(n) for n in neighbours)
mean_belief = sum(
n.mean_belief * self.degree(n) / influence for n in neighbours
)
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
self.mean_belief = self.mean_belief ** (
1 - self.terrorist_additional_influence
)
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
for neighbour in neighbours:
if self.betweenness(self) < self.betweenness(neighbour):
return
return self.leader
def ego_search(self, steps=1, center=False, node=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = as_node(node if node is not None else self)
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
if (
force
or (not hasattr(self.model, "_degree"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
if (
force
or (not hasattr(self.model, "_betweenness"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[node]
class TrainingAreaModel(FSM, Geo):
"""
Settings:
training_influence
min_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params["training_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
@default_state
@state
def terrorist(self):
for neighbour in self.get_neighboring_agents(agent_class=TerroristSpreadModel):
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.training_influence
)
class HavenModel(FSM, Geo):
"""
Settings:
haven_influence
min_vulnerability
max_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params["haven_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
self.max_vulnerability = model.environment_params["max_vulnerability"]
def get_occupants(self, **kwargs):
return self.get_neighboring_agents(agent_class=TerroristSpreadModel, **kwargs)
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
if not civilians:
return self.terrorist
for neighbour in self.get_occupants():
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * (
1 - self.haven_influence
)
return self.civilian
@state
def terrorist(self):
for neighbour in self.get_occupants():
if neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.haven_influence
)
return self.terrorist
class TerroristNetworkModel(TerroristSpreadModel):
"""
Settings:
sphere_influence
vision_range
weight_social_distance
weight_link_distance
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = model.environment_params["vision_range"]
self.sphere_influence = model.environment_params["sphere_influence"]
self.weight_social_distance = model.environment_params["weight_social_distance"]
self.weight_link_distance = model.environment_params["weight_link_distance"]
@state
def terrorist(self):
self.update_relationships()
return super().terrorist()
@state
def leader(self):
self.update_relationships()
return super().leader()
def update_relationships(self):
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
close_ups = set(
self.geo_search(
radius=self.vision_range, agent_class=TerroristNetworkModel
)
)
step_neighbours = set(
self.ego_search(
self.sphere_influence,
agent_class=TerroristNetworkModel,
center=False,
)
)
neighbours = set(
agent.id
for agent in self.get_neighboring_agents(
agent_class=TerroristNetworkModel
)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id)
spatial_proximity = 1 - self.get_distance(agent.id)
prob_new_interaction = (
self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent["id"] == agent.civilian.id
and self.random.random() < prob_new_interaction
):
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x)
dy = abs(source_y - target_y)
return (dx**2 + dy**2) ** (1 / 2)
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.G, self.id, target)
except nx.NetworkXNoPath:
return float("inf")

@@ -0,0 +1,62 @@
name: TerroristNetworkModel_sim
max_steps: 150
num_trials: 1
model_params:
network_params:
generator: random_geometric_graph
radius: 0.2
# generator: geographical_threshold_graph
# theta: 20
n: 100
network_agents:
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.8
state:
id: civilian # Civilians
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.1
state:
id: leader # Leaders
- agent_class: TerroristNetworkModel.TrainingAreaModel
weight: 0.05
state:
id: terrorist # Terrorism
- agent_class: TerroristNetworkModel.HavenModel
weight: 0.05
state:
id: civilian # Civilian
# TerroristSpreadModel
information_spread_intensity: 0.7
terrorist_additional_influence: 0.035
max_vulnerability: 0.7
prob_interaction: 0.5
# TrainingAreaModel and HavenModel
training_influence: 0.20
haven_influence: 0.20
# TerroristNetworkModel
vision_range: 0.30
sphere_influence: 2
weight_social_distance: 0.035
weight_link_distance: 0.035
visualization_params:
# Icons downloaded from https://www.iconfinder.com/
shape_property: agent
shapes:
TrainingAreaModel: target
HavenModel: home
TerroristNetworkModel: person
colors:
- attr_id: civilian
color: '#40de40'
- attr_id: terrorist
color: red
- attr_id: leader
color: '#c16a6a'
background_image: 'map_4800x2860.jpg'
background_opacity: '0.9'
background_filter_color: 'blue'
skip_test: true # This simulation takes too long for automated tests.

@@ -1,14 +1,15 @@
---
name: torvalds_example
max_time: 1
max_steps: 10
interval: 2
agent_type: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'
model_params:
agent_class: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'

@@ -1,596 +0,0 @@
from nxsim import BaseNetworkAgent
import numpy as np
import random
import settings
settings.init()
##############################
# Variables initialization #
##############################
def init():
global networkStatus
networkStatus = {} # Dict that will contain the status of every agent in the network
sentimentCorrelationNodeArray=[]
for x in range(0, settings.number_of_nodes):
sentimentCorrelationNodeArray.append({'id':x})
# Initialize agent states. Let's assume everyone is normal.
init_states = [{'id': 0, } for _ in range(settings.number_of_nodes)] # add keys as as necessary, but "id" must always refer to that state category
####################
# Available models #
####################
class BaseBehaviour(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self._attrs = {}
@property
def attrs(self):
now = self.env.now
if now not in self._attrs:
self._attrs[now] = {}
return self._attrs[now]
@attrs.setter
def attrs(self, value):
self._attrs[self.env.now] = value
def run(self):
while True:
self.step(self.env.now)
yield self.env.timeout(settings.timeout)
def step(self, now):
networkStatus['agent_%s'% self.id] = self.to_json()
def to_json(self):
final = {}
for stamp, attrs in self._attrs.items():
for a in attrs:
if a not in final:
final[a] = {}
final[a][stamp] = attrs[a]
return final
class ControlModelM2(BaseBehaviour):
#Init infected
init_states[random.randint(0,settings.number_of_nodes-1)] = {'id':1}
init_states[random.randint(0,settings.number_of_nodes-1)] = {'id':1}
# Init beacons
init_states[random.randint(0, settings.number_of_nodes-1)] = {'id': 4}
init_states[random.randint(0, settings.number_of_nodes-1)] = {'id': 4}
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(settings.prob_neutral_making_denier, settings.standard_variance)
self.prob_infect = np.random.normal(settings.prob_infect, settings.standard_variance)
self.prob_cured_healing_infected = np.random.normal(settings.prob_cured_healing_infected, settings.standard_variance)
self.prob_cured_vaccinate_neutral = np.random.normal(settings.prob_cured_vaccinate_neutral, settings.standard_variance)
self.prob_vaccinated_healing_infected = np.random.normal(settings.prob_vaccinated_healing_infected, settings.standard_variance)
self.prob_vaccinated_vaccinate_neutral = np.random.normal(settings.prob_vaccinated_vaccinate_neutral, settings.standard_variance)
self.prob_generate_anti_rumor = np.random.normal(settings.prob_generate_anti_rumor, settings.standard_variance)
def step(self, now):
if self.state['id'] == 0: #Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: #Infected
self.infected_behaviour()
elif self.state['id'] == 2: #Cured
self.cured_behaviour()
elif self.state['id'] == 3: #Vaccinated
self.vaccinated_behaviour()
elif self.state['id'] == 4: #Beacon-off
self.beacon_off_behaviour()
elif self.state['id'] == 5: #Beacon-on
self.beacon_on_behaviour()
self.attrs['status'] = self.state['id']
super().step(now)
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors)>0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
def beacon_off_behaviour(self):
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
self.state['id'] = 5 # Beacon on
def beacon_on_behaviour(self):
# Cure (M2 feature added)
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
class SpreadModelM2(BaseBehaviour):
init_states[random.randint(0,settings.number_of_nodes)] = {'id':1}
init_states[random.randint(0,settings.number_of_nodes)] = {'id':1}
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(settings.prob_neutral_making_denier, settings.standard_variance)
self.prob_infect = np.random.normal(settings.prob_infect, settings.standard_variance)
self.prob_cured_healing_infected = np.random.normal(settings.prob_cured_healing_infected, settings.standard_variance)
self.prob_cured_vaccinate_neutral = np.random.normal(settings.prob_cured_vaccinate_neutral, settings.standard_variance)
self.prob_vaccinated_healing_infected = np.random.normal(settings.prob_vaccinated_healing_infected, settings.standard_variance)
self.prob_vaccinated_vaccinate_neutral = np.random.normal(settings.prob_vaccinated_vaccinate_neutral, settings.standard_variance)
self.prob_generate_anti_rumor = np.random.normal(settings.prob_generate_anti_rumor, settings.standard_variance)
def step(self, now):
if self.state['id'] == 0: #Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: #Infected
self.infected_behaviour()
elif self.state['id'] == 2: #Cured
self.cured_behaviour()
elif self.state['id'] == 3: #Vaccinated
self.vaccinated_behaviour()
self.attrs['status'] = self.state['id']
super().step(now)
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors)>0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
class SISaModel(BaseBehaviour):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.neutral_discontent_spon_prob = np.random.normal(settings.neutral_discontent_spon_prob, settings.standard_variance)
self.neutral_discontent_infected_prob = np.random.normal(settings.neutral_discontent_infected_prob,settings.standard_variance)
self.neutral_content_spon_prob = np.random.normal(settings.neutral_content_spon_prob,settings.standard_variance)
self.neutral_content_infected_prob = np.random.normal(settings.neutral_content_infected_prob,settings.standard_variance)
self.discontent_neutral = np.random.normal(settings.discontent_neutral,settings.standard_variance)
self.discontent_content = np.random.normal(settings.discontent_content,settings.variance_d_c)
self.content_discontent = np.random.normal(settings.content_discontent,settings.variance_c_d)
self.content_neutral = np.random.normal(settings.content_neutral,settings.standard_variance)
def step(self, now):
if self.state['id'] == 0:
self.neutral_behaviour()
if self.state['id'] == 1:
self.discontent_behaviour()
if self.state['id'] == 2:
self.content_behaviour()
self.attrs['status'] = self.state['id']
super().step(now)
def neutral_behaviour(self):
#Spontaneus effects
if random.random() < self.neutral_discontent_spon_prob:
self.state['id'] = 1
if random.random() < self.neutral_content_spon_prob:
self.state['id'] = 2
#Infected
discontent_neighbors = self.get_neighboring_agents(state_id=1)
if random.random() < len(discontent_neighbors)*self.neutral_discontent_infected_prob:
self.state['id'] = 1
content_neighbors = self.get_neighboring_agents(state_id=2)
if random.random() < len(content_neighbors)*self.neutral_content_infected_prob:
self.state['id'] = 2
def discontent_behaviour(self):
#Healing
if random.random() < self.discontent_neutral:
self.state['id'] = 0
#Superinfected
content_neighbors = self.get_neighboring_agents(state_id=2)
if random.random() < len(content_neighbors)*self.discontent_content:
self.state['id'] = 2
def content_behaviour(self):
#Healing
if random.random() < self.content_neutral:
self.state['id'] = 0
#Superinfected
discontent_neighbors = self.get_neighboring_agents(state_id=1)
if random.random() < len(discontent_neighbors)*self.content_discontent:
self.state['id'] = 1
class BigMarketModel(BaseBehaviour):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.enterprises = settings.enterprises
self.type = ""
self.number_of_enterprises = len(settings.enterprises)
if self.id < self.number_of_enterprises: #Enterprises
self.state['id']=self.id
self.type="Enterprise"
self.tweet_probability = settings.tweet_probability_enterprises[self.id]
else: #normal users
self.state['id']=self.number_of_enterprises
self.type="User"
self.tweet_probability = settings.tweet_probability_users
self.tweet_relevant_probability = settings.tweet_relevant_probability
self.tweet_probability_about = settings.tweet_probability_about #List
self.sentiment_about = settings.sentiment_about #List
def step(self, now):
if(self.id < self.number_of_enterprises): # Enterprise
self.enterpriseBehaviour()
else: # User
self.userBehaviour()
for i in range(self.number_of_enterprises): # So that it never is set to 0 if there are not changes (logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
super().step(now)
def enterpriseBehaviour(self):
if random.random()< self.tweet_probability: #Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) #Nodes neighbour users
for x in aware_neighbors:
if random.uniform(0,10) < 5:
x.sentiment_about[self.id] += 0.1 #Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 #Decrements for enterprise
# Enforce the limits
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id]< -1:
x.sentiment_about[self.id] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
def userBehaviour(self):
if random.random() < self.tweet_probability: #Tweets
if random.random() < self.tweet_relevant_probability: #Tweets something relevant
#Tweet probability per enterprise
for i in range(self.number_of_enterprises):
random_num = random.random()
if random_num < self.tweet_probability_about[i]:
#The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
# Negative
self.userTweets("negative",i)
elif self.sentiment_about[i] == 0:
# Neutral
pass
else:
# Positive
self.userTweets("positive",i)
def userTweets(self,sentiment,enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) #Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] +=0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -=0.003
else:
pass
# Enforce the limits
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]
class SentimentCorrelationModel(BaseBehaviour):

    def __init__(self, environment=None, agent_id=0, state=()):
        super().__init__(environment=environment, agent_id=agent_id, state=state)
        self.outside_effects_prob = settings.outside_effects_prob
        self.anger_prob = settings.anger_prob
        self.joy_prob = settings.joy_prob
        self.sadness_prob = settings.sadness_prob
        self.disgust_prob = settings.disgust_prob
        self.time_awareness = []
        for i in range(4):  # In this model we have 4 sentiments
            self.time_awareness.append(0)  # 0 -> anger, 1 -> joy, 2 -> sadness, 3 -> disgust
        sentimentCorrelationNodeArray[self.id][self.env.now] = 0

    def step(self, now):
        self.behaviour()
        super().step(now)

    def behaviour(self):
        angry_neighbors_1_time_step = []
        joyful_neighbors_1_time_step = []
        sad_neighbors_1_time_step = []
        disgusted_neighbors_1_time_step = []

        angry_neighbors = self.get_neighboring_agents(state_id=1)
        for x in angry_neighbors:
            if x.time_awareness[0] > (self.env.now - 500):
                angry_neighbors_1_time_step.append(x)
        num_neighbors_angry = len(angry_neighbors_1_time_step)

        joyful_neighbors = self.get_neighboring_agents(state_id=2)
        for x in joyful_neighbors:
            if x.time_awareness[1] > (self.env.now - 500):
                joyful_neighbors_1_time_step.append(x)
        num_neighbors_joyful = len(joyful_neighbors_1_time_step)

        sad_neighbors = self.get_neighboring_agents(state_id=3)
        for x in sad_neighbors:
            if x.time_awareness[2] > (self.env.now - 500):
                sad_neighbors_1_time_step.append(x)
        num_neighbors_sad = len(sad_neighbors_1_time_step)

        disgusted_neighbors = self.get_neighboring_agents(state_id=4)
        for x in disgusted_neighbors:
            if x.time_awareness[3] > (self.env.now - 500):
                disgusted_neighbors_1_time_step.append(x)
        num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)

        anger_prob = settings.anger_prob + (len(angry_neighbors_1_time_step) * settings.anger_prob)
        joy_prob = settings.joy_prob + (len(joyful_neighbors_1_time_step) * settings.joy_prob)
        sadness_prob = settings.sadness_prob + (len(sad_neighbors_1_time_step) * settings.sadness_prob)
        disgust_prob = settings.disgust_prob + (len(disgusted_neighbors_1_time_step) * settings.disgust_prob)
        outside_effects_prob = settings.outside_effects_prob

        num = random.random()

        if num < outside_effects_prob:
            self.state['id'] = random.randint(1, 4)
            # Stored when the agent becomes affected, so the dynamic network can be reconstructed
            sentimentCorrelationNodeArray[self.id][self.env.now] = self.state['id']
            self.time_awareness[self.state['id'] - 1] = self.env.now
            self.attrs['sentiment'] = self.state['id']

        if num < anger_prob:
            self.state['id'] = 1
            sentimentCorrelationNodeArray[self.id][self.env.now] = 1
            self.time_awareness[self.state['id'] - 1] = self.env.now
        elif num < joy_prob + anger_prob and num > anger_prob:
            self.state['id'] = 2
            sentimentCorrelationNodeArray[self.id][self.env.now] = 2
            self.time_awareness[self.state['id'] - 1] = self.env.now
        elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
            self.state['id'] = 3
            sentimentCorrelationNodeArray[self.id][self.env.now] = 3
            self.time_awareness[self.state['id'] - 1] = self.env.now
        elif num < disgust_prob + sadness_prob + anger_prob + joy_prob and num > sadness_prob + anger_prob + joy_prob:
            self.state['id'] = 4
            sentimentCorrelationNodeArray[self.id][self.env.now] = 4
            self.time_awareness[self.state['id'] - 1] = self.env.now

        self.attrs['sentiment'] = self.state['id']
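
In behaviour(), each base probability is boosted by the number of neighbours that reached that emotion within the last 500 time units, and a single random draw is compared against the cumulative sums (anger, then joy, then sadness, then disgust). The following sketch restates only that neighbour-driven selection step; pick_emotion and its arguments are illustrative and not part of soil:

import random

# Illustrative cumulative-probability selection, mirroring the if/elif chain above.
# base: per-emotion base probability; recent_neighbors: count of recently affected neighbours.
def pick_emotion(base, recent_neighbors, rng=random):
    boosted = {k: base[k] * (1 + recent_neighbors.get(k, 0)) for k in base}
    num = rng.random()
    threshold = 0.0
    for emotion in ("anger", "joy", "sadness", "disgust"):  # same order as state ids 1-4
        threshold += boosted[emotion]
        if num < threshold:
            return emotion
    return None  # no emotion triggered this step

choice = pick_emotion({"anger": 0.06, "joy": 0.05, "sadness": 0.02, "disgust": 0.02},
                      {"joy": 3})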
class BassModel(BaseBehaviour):

    def __init__(self, environment=None, agent_id=0, state=()):
        super().__init__(environment=environment, agent_id=agent_id, state=state)
        self.innovation_prob = settings.innovation_prob
        self.imitation_prob = settings.imitation_prob
        sentimentCorrelationNodeArray[self.id][self.env.now] = 0

    def step(self, now):
        self.behaviour()
        super().step(now)

    def behaviour(self):
        # Outside effects
        if random.random() < settings.innovation_prob:
            if self.state['id'] == 0:
                self.state['id'] = 1
                sentimentCorrelationNodeArray[self.id][self.env.now] = 1
            else:
                pass
            self.attrs['status'] = self.state['id']
            return

        # Imitation effects
        if self.state['id'] == 0:
            aware_neighbors = self.get_neighboring_agents(state_id=1)
            num_neighbors_aware = len(aware_neighbors)
            if random.random() < (settings.imitation_prob * num_neighbors_aware):
                self.state['id'] = 1
                sentimentCorrelationNodeArray[self.id][self.env.now] = 1
            else:
                pass
            self.attrs['status'] = self.state['id']
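
BassModel implements the classic Bass diffusion rule: an unaware agent adopts either spontaneously, with the constant innovation probability, or through imitation, with a probability proportional to its number of aware neighbours. A minimal sketch of that decision in isolation (bass_adopts and its default values are illustrative):

import random

# Illustrative one-step Bass adoption check for an agent that is not yet aware.
def bass_adopts(num_aware_neighbors, p_innovation=0.001, p_imitation=0.005, rng=random):
    if rng.random() < p_innovation:   # outside (innovation) effects
        return True
    return rng.random() < p_imitation * num_aware_neighbors  # imitation effects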
class IndependentCascadeModel(BaseBehaviour):

    def __init__(self, environment=None, agent_id=0, state=()):
        super().__init__(environment=environment, agent_id=agent_id, state=state)
        self.innovation_prob = settings.innovation_prob
        self.imitation_prob = settings.imitation_prob
        self.time_awareness = 0
        sentimentCorrelationNodeArray[self.id][self.env.now] = 0

    def step(self, now):
        self.behaviour()
        super().step(now)

    def behaviour(self):
        aware_neighbors_1_time_step = []

        # Outside effects
        if random.random() < settings.innovation_prob:
            if self.state['id'] == 0:
                self.state['id'] = 1
                sentimentCorrelationNodeArray[self.id][self.env.now] = 1
                self.time_awareness = self.env.now  # To know when they have been infected
            else:
                pass
            self.attrs['status'] = self.state['id']
            return

        # Imitation effects
        if self.state['id'] == 0:
            aware_neighbors = self.get_neighboring_agents(state_id=1)
            for x in aware_neighbors:
                if x.time_awareness == (self.env.now - 1):
                    aware_neighbors_1_time_step.append(x)
            num_neighbors_aware = len(aware_neighbors_1_time_step)
            if random.random() < (settings.imitation_prob * num_neighbors_aware):
                self.state['id'] = 1
                sentimentCorrelationNodeArray[self.id][self.env.now] = 1
            else:
                pass
            self.attrs['status'] = self.state['id']
            return
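
Unlike BassModel, the independent cascade variant only counts neighbours whose time_awareness equals the previous time step, so each newly aware neighbour gets a single opportunity to influence this agent. A hedged sketch of that one-shot check (cascade_activates is an illustrative name, not a soil API):

import random

# Illustrative independent-cascade check: only neighbours that became aware at
# exactly now - 1 contribute to this step's activation probability.
def cascade_activates(neighbor_awareness_times, now, p_imitation=0.005, rng=random):
    recent = [t for t in neighbor_awareness_times if t == now - 1]
    return rng.random() < p_imitation * len(recent)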

File diff suppressed because one or more lines are too long

requirements.txt

@@ -1,6 +1,10 @@
-nxsim
-simpy
-networkx
+networkx>=2.5
 numpy
 matplotlib
-pyyaml
+pyyaml>=5.1
+pandas>=1
+SALib>=1.3
+Jinja2
+Mesa>=1.1
+pydantic>=1.9
+sqlalchemy>=1.4

setup.cfg (new file, 4 additions)

@@ -0,0 +1,4 @@
+[aliases]
+test=pytest
+[tool:pytest]
+addopts = --verbose

setup.py

@@ -1,20 +1,27 @@
-import pip
 import os
 from setuptools import setup
-# parse_requirements() returns generator of pip.req.InstallRequirement objects
-from pip.req import parse_requirements
-from soil import __version__
-try:
-    install_reqs = parse_requirements(
-        "requirements.txt", session=pip.download.PipSession())
-    test_reqs = parse_requirements(
-        "test-requirements.txt", session=pip.download.PipSession())
-except AttributeError:
-    install_reqs = parse_requirements("requirements.txt")
-    test_reqs = parse_requirements("test-requirements.txt")
-install_reqs = [str(ir.req) for ir in install_reqs]
-test_reqs = [str(ir.req) for ir in test_reqs]
+with open(os.path.join('soil', 'VERSION')) as f:
+    __version__ = f.readlines()[0].strip()
+
+assert __version__
+
+def parse_requirements(filename):
+    """ load requirements from a pip requirements file """
+    with open(filename, 'r') as f:
+        lineiter = list(line.strip() for line in f)
+    return [line for line in lineiter if line and not line.startswith("#")]
+
+install_reqs = parse_requirements("requirements.txt")
+test_reqs = parse_requirements("test-requirements.txt")
+
+extras_require={
+    'mesa': ['mesa>=0.8.9'],
+    'geo': ['scipy>=1.3'],
+    'web': ['tornado']
+}
+extras_require['all'] = [dep for package in extras_require.values() for dep in package]
 
 setup(
@@ -28,12 +35,24 @@ setup(
     download_url='https://github.com/gsi-upm/soil/archive/{}.tar.gz'.format(
         __version__),
     keywords=['agent', 'social', 'simulator'],
-    classifiers=[],
+    classifiers=[
+        'Development Status :: 5 - Production/Stable',
+        'Environment :: Console',
+        'Intended Audience :: End Users/Desktop',
+        'Intended Audience :: Developers',
+        'License :: OSI Approved :: Apache Software License',
+        'Operating System :: MacOS :: MacOS X',
+        'Operating System :: Microsoft :: Windows',
+        'Operating System :: POSIX',
+        'Programming Language :: Python :: 3'],
     install_requires=install_reqs,
+    extras_require=extras_require,
     tests_require=test_reqs,
     setup_requires=['pytest-runner', ],
+    pytest_plugins = ['pytest_profiling'],
     include_package_data=True,
     entry_points={
         'console_scripts':
-        ['soil = soil.__init__:main']
+        ['soil = soil.__main__:main',
+         'soil-web = soil.web.__init__:main']
     })
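
The rewritten setup.py drops the pip.req import, which is no longer available in modern pip, and reads requirements.txt with the small parse_requirements helper shown in the diff, keeping every non-empty line that does not start with '#'. It also declares mesa, geo, and web extras, combined into an all extra. A quick, hypothetical check of that helper outside of setup.py (the sample file name is made up):

# The helper below is copied from the new setup.py; the sample requirements file is hypothetical.
def parse_requirements(filename):
    """ load requirements from a pip requirements file """
    with open(filename, 'r') as f:
        lineiter = list(line.strip() for line in f)
    return [line for line in lineiter if line and not line.startswith("#")]

with open("example-requirements.txt", "w") as f:
    f.write("# a comment\nnetworkx>=2.5\n\nMesa>=1.1\n")

assert parse_requirements("example-requirements.txt") == ["networkx>=2.5", "Mesa>=1.1"]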

Some files were not shown because too many files have changed in this diff.