mirror of https://github.com/gsi-upm/soil synced 2025-09-13 19:52:20 +00:00

Compare commits


84 Commits
0.13.0 ... mesa

Author SHA1 Message Date
J. Fernando Sánchez
50bca88362 Fix pre-release version of v1.0.0rc1 2023-04-20 18:07:42 +02:00
J. Fernando Sánchez
cc238d84ec Pre-release version of v1.0 2023-04-20 17:57:18 +02:00
J. Fernando Sánchez
be65592055 Default parameters terroristnetwork 2023-04-14 20:25:16 +02:00
J. Fernando Sánchez
1d882dcff6 Update easy function 2023-04-14 20:21:34 +02:00
J. Fernando Sánchez
b3e77cbff5 Update python version in gitlab-ci 2023-04-14 20:07:16 +02:00
J. Fernando Sánchez
05748a3250 Update python version requirement 2023-04-14 20:03:47 +02:00
J. Fernando Sánchez
a3fc6a5efa Update README 2023-04-14 19:56:44 +02:00
J. Fernando Sánchez
4e95709188 Update README 2023-04-14 19:53:31 +02:00
J. Fernando Sánchez
feab0ba79e Large set of changes for v0.30
The examples weren't being properly tested in the last commit. When we fixed
that, a lot of bugs in the new implementation of environment and agent were
found, which accounts for most of these changes.

The main difference is the mechanism to load simulations from a configuration
file. For that to work, we had to rework our module loading code in
`serialization` and add a `source_file` attribute to configurations (and
simulations, for that matter).
2023-04-14 19:41:24 +02:00
J. Fernando Sánchez
73282530fd Big refactor v0.30
All tests pass, except for the TestConfig suite, which is not too critical, as the
plan from this version onwards is to avoid configuration as much as possible.
2023-04-09 04:19:24 +02:00
J. Fernando Sánchez
2869b1e1e6 Clean-up
* Removed old/unnecessary models
* Added a `simulation.{iter_}from_py` method to load simulations from python
files
* Changed tests of examples to run programmatic simulations
* Fixed programmatic examples
2022-11-13 20:31:05 +01:00
J. Fernando Sánchez
d3cee18635 Add seed to cars example 2022-10-20 14:47:28 +02:00
J. Fernando Sánchez
9a7b62e88e Release 0.30.0rc3 2022-10-20 14:12:34 +02:00
J. Fernando Sánchez
c09e480d37 black formatting 2022-10-20 14:12:10 +02:00
J. Fernando Sánchez
b2d48cb4df Add test cases for 'ASK' 2022-10-20 14:10:34 +02:00
J. Fernando Sánchez
a1262edd2a Refactored time
Treating time and conditions as the same entity was getting confusing, and it
added a lot of unnecessary abstraction in a critical part (the scheduler).

The scheduling queue now has the time as a floating number (faster), the agent
id (for ties) and the condition, as well as the agent. The first three
elements (time, id, condition) can be considered as the "key" for the event.

To allow for agent execution to be "randomized" within every step, a new
parameter has been added to the scheduler, which makes it add a random number to
the key in order to change the ordering.

`EventedAgent.received` now checks the messages before returning control to the
user by default.
2022-10-20 12:15:25 +02:00
J. Fernando Sánchez
cbbaf73538 Fix bug EventedEnvironment 2022-10-20 12:07:56 +02:00
J. Fernando Sánchez
2f5e5d0a74 Black formatting 2022-10-18 17:03:40 +02:00
J. Fernando Sánchez
a2fb25c160 Version 0.30.0rc2
* Fix CLI arguments not being used when easy is passed a simulation instance
* Docs for `examples/events_and_messages/cars.py`
2022-10-18 17:02:12 +02:00
J. Fernando Sánchez
5fcf610108 Version 0.30.0rc1 2022-10-18 15:02:05 +02:00
J. Fernando Sánchez
159c9a9077 Add events 2022-10-18 13:11:01 +02:00
J. Fernando Sánchez
3776c4e5c5 Refactor
* Removed references to `set_state`
* Split some functionality from `agents` into separate files (`fsm` and
`network_agents`)
* Rename `neighboring_agents` to `neighbors`
* Delete some spurious functions
2022-10-17 21:49:31 +02:00
J. Fernando Sánchez
880a9f2a1c black formatting 2022-10-17 20:23:57 +02:00
J. Fernando Sánchez
227fdf050e Fix conditionals 2022-10-17 19:29:39 +02:00
J. Fernando Sánchez
5d759d0072 Add conditional time values 2022-10-17 13:58:14 +02:00
J. Fernando Sánchez
77d08fc592 Agent step can be a generator 2022-10-17 08:58:51 +02:00
J. Fernando Sánchez
0efcd24d90 Improve exporters 2022-10-16 21:57:30 +02:00
J. Fernando Sánchez
78833a9e08 Formatted with black 2022-10-16 17:58:19 +02:00
J. Fernando Sánchez
d9947c2c52 WIP: all tests pass
Documentation needs some improvement

The API has been simplified to only allow for ONE topology per
NetworkEnvironment.
This covers the main use case, and simplifies the code.
2022-10-16 17:56:23 +02:00
J. Fernando Sánchez
cd62c23cb9 WIP: all tests pass 2022-10-13 22:43:16 +02:00
J. Fernando Sánchez
f811ee18c5 WIP 2022-10-06 15:49:19 +02:00
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
J. Fernando Sánchez
50cba751a6 Release 0.20.6 2022-07-05 12:08:34 +02:00
J. Fernando Sánchez
dfb6d13649 version 0.20.5 2022-05-18 16:13:53 +02:00
J. Fernando Sánchez
5559d37e57 version 0.20.4 2022-05-18 15:20:58 +02:00
J. Fernando Sánchez
2116fe6f38 Bug fixes and minor improvements 2022-05-12 16:14:47 +02:00
J. Fernando Sánchez
affeeb9643 Update examples 2022-04-04 16:47:58 +02:00
J. Fernando Sánchez
42ddc02318 CI: delay PyPI check 2022-03-07 14:35:07 +01:00
J. Fernando Sánchez
cab9a3440b Fix typo CI/CD 2022-03-07 13:57:25 +01:00
J. Fernando Sánchez
db505da49c Minor CI change 2022-03-07 13:35:02 +01:00
J. Fernando Sánchez
8eb8eb16eb Minor CI change 2022-03-07 12:51:22 +01:00
J. Fernando Sánchez
3fc5ca8c08 Fix requirements issue CI/CD 2022-03-07 12:46:01 +01:00
J. Fernando Sánchez
c02e6ea2e8 Fix die bug 2022-03-07 11:17:27 +01:00
J. Fernando Sánchez
38f8a8d110 Merge branch 'mesa'
First iteration to achieve MESA compatibility.
As a side effect, we have removed `simpy`.

For a full list of changes, see `CHANGELOG.md`.
2022-03-07 10:54:47 +01:00
J. Fernando Sánchez
cb72aac980 Add random activation example 2022-03-07 10:48:59 +01:00
J. Fernando Sánchez
6c4f44b4cb Partial MESA compatibility and several fixes
Documentation for the new APIs is still a work in progress :)
2021-10-15 20:16:49 +02:00
J. Fernando Sánchez
af9a392a93 WIP: mesa compat
All tests pass but some features are still missing/unclear:

- Mesa agents do not have a `state`, so their "metrics" don't get stored. I will
probably refactor this to remove some magic in this regard. This should get rid
of the `_state` dictionary and the setitem/getitem magic.
- Simulation is still different from a runner. So far only Agent and
Environment/Model have been updated.
2021-10-15 13:36:39 +02:00
J. Fernando Sánchez
5d7e57675a WIP: mesa compatibility 2021-10-14 17:37:06 +02:00
J. Fernando Sánchez
e860bdb922 v0.15.2
See CHANGELOG.md for a complete list of changes
2021-05-22 16:33:52 +02:00
J. Fernando Sánchez
d6b684c1c1 Fix docs requirements 2021-05-22 16:08:38 +02:00
J. Fernando Sánchez
05f7f49233 Refactoring v0.15.1
See CHANGELOG.md for a full list of changes

* Removed nxsim
* Refactored `agents.NetworkAgent` and `agents.BaseAgent`
* Refactored exporters
* Added stats to history
2020-11-19 23:58:47 +01:00
J. Fernando Sánchez
3b2c6a3db5 Seed before env initialization
Fixes #6
2020-07-27 12:29:24 +02:00
J. Fernando Sánchez
6118f917ee Fix Windows bug
Update URLs to gsi.upm.es
2020-07-07 10:57:10 +02:00
J. Fernando Sánchez
6adc8d36ba minor change in docs 2020-03-13 12:50:05 +01:00
J. Fernando Sánchez
c8b8149a17 Updated to 0.14.6
Fix compatibility issues with newer networkx and pandas versions
2020-03-11 16:17:14 +01:00
J. Fernando Sánchez
6690b6ee5f Fix incompatibility and bug in get_agents 2019-05-16 19:59:46 +02:00
J. Fernando Sánchez
97835b3d10 Clean up exporters 2019-05-03 13:17:27 +02:00
J. Fernando Sánchez
b0add8552e Tag version 0.14.0 2019-04-30 16:26:08 +02:00
J. Fernando Sánchez
1cf85ea450 Avoid writing gexf in test 2019-04-30 16:16:46 +02:00
J. Fernando Sánchez
c32e167fb8 Bump pyyaml to 5.1 2019-04-30 16:04:12 +02:00
J. Fernando Sánchez
5f68b5321d Pinning scipy to 1.2.1
1.3.0rc1 is not compatible with salib
2019-04-30 15:52:37 +02:00
J. Fernando Sánchez
2a2843bd19 Add tests exporters 2019-04-30 09:28:53 +02:00
J. Fernando Sánchez
d1006bd55c WIP: exporters 2019-04-29 18:47:15 +02:00
J. Fernando Sánchez
9bc036d185 WIP: exporters 2019-04-26 19:22:45 +02:00
J. Fernando Sánchez
a3ea434f23 0.13.8 2019-02-19 21:17:19 +01:00
J. Fernando Sánchez
65f6aa72f3 fix timeout in FSM. Improve logs 2019-02-01 19:05:07 +01:00
J. Fernando Sánchez
09e14c6e84 Add generator and programmatic examples 2018-12-20 19:25:33 +01:00
J. Fernando Sánchez
8593ac999d Swap test and build in CI. Remove tests in tags 2018-12-20 17:56:33 +01:00
J. Fernando Sánchez
90338c3549 skip-tls-verify in kaniko 2018-12-20 17:48:58 +01:00
J. Fernando Sánchez
1d532dacfe Remove entrypoint build stage 2018-12-20 15:14:58 +01:00
J. Fernando Sánchez
a1f8d8c9c5 Change entrypoint build stage 2018-12-20 15:07:45 +01:00
J. Fernando Sánchez
de326eb331 Remove CI global image 2018-12-20 15:05:45 +01:00
J. Fernando Sánchez
04b4380c61 Fix wrong import soil.web 2018-12-20 14:06:18 +01:00
J. Fernando Sánchez
d70a0c865c limit ci jobs to docker runners 2018-12-09 17:22:40 +01:00
J. Fernando Sánchez
625c28e4ee Fix CI syntax 2018-12-09 17:09:31 +01:00
J. Fernando Sánchez
9749f4ca14 Fix multithreading
Multithreading needs pickling to work.
Pickling/unpickling didn't work in some situations, like when the
environment_agents parameter was left blank.
This was due to two reasons:

1) agents and history didn't have a setstate method, and some of their
attributes cannot be pickled (generators, sqlite connection)
2) the environment was adding generators (agents) to its state.

This fixes the situation by restricting the keys that the environment exports
when it pickles, and by adding the set/getstate methods in agents.

The resulting pickles should contain enough information to inspect
them (history, state values, etc), but very limited.
2018-12-09 16:58:49 +01:00
J. Fernando Sánchez
3526fa29d7 Fix bug parallel 2018-12-09 14:06:50 +01:00
J. Fernando Sánchez
53604c1e66 Fix quickstart.rst markdown code 2018-12-09 13:10:00 +01:00
142 changed files with 9723 additions and 88643 deletions

View File

@@ -1,2 +1,5 @@
**/soil_output
.*
**/__pycache__
__pycache__
*.pyc

1
.gitignore vendored
View File

@@ -8,3 +8,4 @@ soil_output
docs/_build*
build/*
dist/*
prof

View File

@@ -1,21 +1,53 @@
image: python:3.7
steps:
- build
stages:
- test
- publish
- check_published
build:
stage: build
docker:
stage: publish
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
tags:
- docker
script:
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
# The skip-tls-verify flag is there because our registry certificate is self signed
- /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
only:
- tags
test:
tags:
- docker
image: python:3.8
stage: test
script:
python setup.py test
- pip install -r requirements.txt -r test-requirements.txt
- python setup.py test
push_pypi:
only:
- tags
tags:
- docker
image: python:3.8
stage: publish
script:
- echo $CI_COMMIT_TAG > soil/VERSION
- pip install twine
- python setup.py sdist bdist_wheel
- TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*
check_pypi:
only:
- tags
tags:
- docker
image: python:3.8
stage: check_published
script:
- pip install soil==$CI_COMMIT_TAG
# Allow PYPI to update its index before we try to install
when: delayed
start_in: 2 minutes

189
CHANGELOG.md Normal file
View File

@@ -0,0 +1,189 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0 UNRELEASED]
Version 1.0 introduced multiple changes, especially on the `Simulation` class and anything related to how configuration is handled.
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
### Added
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* Environments now have a class method to run them directly, which makes them easier to use without a `Simulation`. Notice that this is different from `run_model`, which is an instance method.
* Ability to run simulations using mesa models
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* Agents can now have generators as a step function or a state. They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or None) can be yielded, not a state. This is because the state will not change; it is resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states (see the first sketch after this list).
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a Cartesian product (i.e., all possible combinations) of the values in the matrix (see the second sketch after this list).
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
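A minimal sketch of a generator-based state (illustrative, not part of the changelog itself), assuming the `FSM`/`state` decorators and `soil.time.Delta` used in the bundled examples:

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Walker(FSM):
    @default_state
    @state
    def walking(self):
        # Only time values (or None) may be yielded; the same state is
        # resumed after each yield, at the requested time.
        for _ in range(3):
            self.debug("still walking")
            yield Delta(2)
        # The return value can still be a state or a (state, time) tuple.
        return self.resting

    @state
    def resting(self):
        self.die()
```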
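A second sketch, for the parameter `matrix`; the keyword arguments (`parameters`, `matrix`, `iterations`) follow the entries above, while the model and parameter names are made up:

```python
from soil import Environment, Simulation


class MyModel(Environment):  # illustrative environment
    pass


sim = Simulation(
    model=MyModel,
    parameters=dict(prob_infect=0.1),     # fixed parameters
    matrix=dict(n_agents=[10, 100],       # 2 x 2 = 4 combinations,
                seed_ratio=[0.05, 0.2]),  # each run `iterations` times
    iterations=3,
)
```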
### Changed
* The configuration schema (`Simulation`) is greatly simplified. Existing simulations should be checked against the new schema.
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa.
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (the specific parameters for this environment). We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa (a sketch follows this list).
* `num_trials` renamed to `iterations`
* General renaming of `trial` to `iteration`, to work better with `mesa`
* `model_parameters` renamed to `parameters` in simulation
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
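A sketch of the new `iteration_end` signature, assuming an `Exporter` base class in `soil.exporters` (the subclass name is illustrative):

```python
from soil.exporters import Exporter


class SummaryExporter(Exporter):
    def iteration_end(self, env, params):
        # `env` is the finished environment; `params` holds the parameters
        # used for this particular iteration.
        print("iteration finished with parameters:", params)
```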
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` no longer re-exports the built-in `time` module (the change is in `soil.simulation`). The re-export used to create subtle import conflicts when importing `soil.time`.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent `__init__`, not in the environment. This decouples both classes a bit more, and it is more intuitive.
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
## [0.20.2]
### Fixed
* CI/CD testing issues
## [0.20.1]
### Fixed
* Agents would run another step after dying.
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id`, to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default
## [0.15.1]
### Added
* read-only `History`
### Fixed
* Serialization problem with the `Environment` in parallel mode.
* Analysis functions now work as they should in the tutorial
## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open
## [0.14.9]
### Changed
* Seed random before environment initialization
## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5
## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception
## [0.14.6]
### Fixed
* Bug with newer versions of networkx (2.4), where the `Graph.node` attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit yaml.SafeLoader to avoid deprecation warnings when using yaml.load. It should not break any existing setups, but we could move to the FullLoader in the future if needed.
## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.
## [0.14.3]
### Fixed
* Incompatibility with py3.3-3.6 due to ModuleNotFoundError and TypeError in DryRunner
## [0.14.2]
### Fixed
* Output path for exporters is now soil_output
### Changed
* CSV output to stdout in dry_run mode
## [0.14.1]
### Changed
* Exporter names in lower case
* Add default exporter in runs
## [0.14.0]
### Added
* Loading configuration from template definitions in the yaml, in preparation for SALib support.
The definition of the variables and their possible values (i.e., a problem in SALib terms), as well as a sampler function, can be provided.
Soil uses this definition and the template to generate a set of configurations.
* Simulation group names, to link related simulations. For now, they are only used to group all simulations in the same group under the same folder.
* Exporters unify exporting/dumping results and other files to disk. If `dry_run` is set to `True`, exporters will write to stdout instead of a file (useful for testing/debugging).
* Distribution exporter, to write statistics about values and value_counts in every simulation. The results are dumped to two CSV files.
### Changed
* `dir_path` is now the directory for resources (modules, files)
* Environments and simulations do not export or write anything by default. That task is delegated to Exporters
### Removed
* The output dir for environments and simulations (see Exporters)
* DrawingAgent, because it wrote to disk and was not being used. We provide a partial alternative in the form of the GraphDrawing exporter. A complete alternative will be provided once the network at each state can be accessed by exporters.
### Fixed
* Modules with custom agents/environments failed to load when they were run from outside the directory of the definition file. Modules are now loaded from the directory of the simulation file in addition to the working directory
* Memory databases (in history) can now be shared between threads.
* Testing all examples, not just subdirectories
## [0.13.8]
### Changed
* Moved TerroristNetworkModel to examples
### Added
* `get_agents` and `count_agents` methods now accept lists as inputs. They can be used to retrieve agents from node ids
* `subgraph` in BaseAgent
* `agents.select` method, to filter out agents
* `skip_test` property in yaml definitions, to force skipping some examples
* `agents.Geo`, with a search function based on position
* `BaseAgent.ego_search` to get nodes from the ego network of a node
* `BaseAgent.degree` and `BaseAgent.betweenness`
### Fixed
## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load the history for examination, at the risk of overwriting existing data. That should not happen, because a History is only created in the Environment, and that one has `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state. e.g. `return self.idle, self.env.timeout(2)` will execute the *idle* state in 2 *units of time* (`t_step=now+2`).
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.

View File

@@ -1,4 +1,7 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil
graft soil
global-exclude __pycache__
global-exclude soil_output
global-exclude *.py[co]

View File

@@ -1,4 +1,7 @@
test:
quick-test:
docker-compose exec dev python -m pytest -s -v
.PHONY: test
test:
docker run -t -v $$PWD:/usr/src/app -w /usr/src/app python:3.7 python setup.py test
.PHONY: test

View File

@@ -1,10 +1,69 @@
# [SOIL](https://github.com/gsi-upm/soil)
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
> **Warning**
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst)
## Features
* Integration with (social) networks (through `networkx`)
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
* Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* Based on the topology of the network
* **Several types of abstractions for agents**:
* Finite state machine, where methods can be turned into a state
* Network agents, which have convenience methods to access the model's topology
* Generator-based agents, whose state is paused through a `yield` and resumed on the next step
* **Reporting and data collection**:
* Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
* All data collected are exported by default to a SQLite database and a description file
* Options to export to other formats, such as CSV, or defining your own exporters
* A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**
* Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
* Time intervals between each step are flexible.
* There are primitives to specify when the next execution of an agent should happen, either as a point in time or as a condition (see the sketch after this list)
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
* Run models in parallel
* Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to quickly run simulations with different parameters
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
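As a taste of the event-based scheduler, here is a minimal sketch (not part of the original README) of an agent telling the scheduler when to run it next, using the `FSM`/`state` decorators and `soil.time.Delta` from the bundled examples:

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Sleeper(FSM):
    @default_state
    @state
    def napping(self):
        self.info(f"Woke up at t={self.now}")
        # Ask the scheduler to run this agent again 10 time units from now,
        # staying in the same state.
        return None, Delta(10)
```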
## Mesa compatibility
SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
For instance, it should be possible to run `mesa.Model` models using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler into a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
For instance, you may add any `soil.agents` agent to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that may not work.
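For illustration only, a hedged sketch of running a plain mesa model through `soil.Simulation` (the model class and the exact keyword arguments are assumptions, not taken from this repository):

```python
from mesa import Model
from soil import Simulation


class MyMesaModel(Model):
    """A vanilla mesa model, unaware of soil."""

    def step(self):
        pass  # advance the model by one step


sim = Simulation(model=MyMesaModel, max_steps=10, iterations=1)

if __name__ == "__main__":
    sim.run()
```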
## Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Citation
If you use Soil in your research, don't forget to cite this paper:
```bibtex
@@ -28,7 +87,6 @@ If you use Soil in your research, don't forget to cite this paper:
```
@Copyright GSI - Universidad Politécnica de Madrid 2017
[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

View File

@@ -31,7 +31,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
extensions = ['IPython.sphinxext.ipython_console_highlighting']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -69,7 +69,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

View File

@@ -1,244 +0,0 @@
Configuring a simulation
------------------------
There are two ways to configure a simulation: programmatically and with a configuration file.
In both cases, the parameters used are the same.
The advantage of a configuration file is that it is a clean declarative description, and it makes it easier to reproduce.
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. code:: yaml
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
- agent_type: SISaModel
weight: 1
state:
id: content
- agent_type: SISaModel
weight: 1
state:
id: discontent
- agent_type: SISaModel
weight: 8
state:
id: neutral
environment_params:
prob_infect: 0.075
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infect``.
The state of the agents will be updated every 2 seconds (``interval``).
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
.. code::
soil_output
└── MyExampleSimulation
├── MyExampleSimulation.dumped.yml
├── MyExampleSimulation.simulation.pickle
├── MyExampleSimulation_trial_0.db.sqlite
├── MyExampleSimulation_trial_1.db.sqlite
└── MyExampleSimulation_trial_2.db.sqlite
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
.. code:: yaml
---
topology:
nodes:
- id: First
- id: Second
links:
- source: First
target: Second
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
.. code:: yaml
network_params:
generator: complete_graph
n: 100
Environment
============
The environment is the place where the shared state of the simulation is stored.
For instance, the probability of disease outbreak.
The configuration file may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment parameters.
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
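A rough sketch of such a custom environment (illustrative only; it only assumes the ``Environment`` base class):

.. code:: python

    import random

    from soil import Environment

    class LotteryEnvironment(Environment):
        """The environment, not the agent, decides who wins the lottery."""

        def plays_lottery(self, agent):
            return random.random() < 0.001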
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_type``) and state.
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
Through the environment, it can access the network topology and the state of other agents.
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated with an agent of that type.
.. code:: yaml
agent_type: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
network_agents:
- agent_type: SISaModel
weight: 1
- agent_type: CounterModel
weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
.. code:: yaml
topology:
nodes:
- id: first
agent_type: BaseAgent
states:
first:
agent_type: SISaModel
This would also work with a randomly generated network:
.. code:: yaml
network:
generator: complete
n: 5
agent_type: BaseAgent
states:
- agent_type: SISaModel
In addition to agent type, you may add a custom initial state to the distribution.
This is very useful to add the same agent type with different states,
e.g., to populate the network with SISaModel agents, roughly 10% of them starting in the discontent state:
.. code:: yaml
network_agents:
- agent_type: SISaModel
weight: 9
state:
id: neutral
- agent_type: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_type: SISaModel
network:
generator: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_type: MyAgent
state:
mood: happy
- agent_type: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.

40
docs/example.yml Normal file
View File

@@ -0,0 +1,40 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
model_params:
topology:
params:
generator: barabasi_albert_graph
n: 100
m: 2
agents:
distribution:
- agent_class: SISaModel
topology: True
ratio: 0.1
state:
state_id: content
- agent_class: SISaModel
topology: True
ratio: .1
state:
state_id: discontent
- agent_class: SISaModel
topology: True
ratio: 0.8
state:
state_id: neutral
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1

View File

@@ -1,12 +1,20 @@
.. Soil documentation master file, created by
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Soil's documentation!
================================
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
.. image:: soil.png
:width: 80%
:align: center
Soil can be installed through pip (see more details in the :doc:`installation` page):
.. code:: bash
pip install soil
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`_.
If you use Soil in your research, do not forget to cite this paper:
@@ -38,8 +46,6 @@ If you use Soil in your research, do not forget to cite this paper:
:caption: Learn more about soil:
installation
quickstart
configuration
Tutorial <soil_tutorial>
..

View File

@@ -1,7 +1,10 @@
Installation
------------
The easiest way to install Soil is through pip, with Python >= 3.4:
Through pip
===========
The easiest way to install Soil is through pip, with Python >= 3.8:
.. code:: bash
@@ -14,11 +17,49 @@ Now test that it worked by running the command line tool
soil --help
Or using soil programmatically:
#or
python -m soil --help
Or, if you're using soil programmatically:
.. code:: python
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.
Web UI
======
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web
Development
===========
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
.. code:: bash
git clone https://github.com/gsi-upm/soil
cd soil
python -m venv .venv
source .venv/bin/activate
pip install --editable .

View File

@@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.

22
docs/mesa.rst Normal file
View File

@@ -0,0 +1,22 @@
Mesa compatibility
------------------
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage

35
docs/notes_v1.0.rst Normal file
View File

@@ -0,0 +1,35 @@
What are the main changes in version 1.0?
#########################################
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.
We drew several lessons from the previous version of Soil, and tried to address them in this version.
Mainly:
- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases. For most use cases, only a handful of variables need to be stored. This fits nicely with Mesa's data collection system.
- The API was too complex, and it was difficult to understand how to use it.
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.
The 0.30 rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
The new Soil is less configuration-centric.
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.
How are agents assigned to nodes in the network?
#################################################
The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
If `topology` is not provided, it will default to `self.model.topology`.
This assignment might err if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
If `node_id` is not provided, random nodes are sampled from the topology until one without an agent is found, and the `node_id` of that node is assigned to the agent.
If every node already has an agent, a new node is automatically added to the topology.
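The rule can be summarised with this rough sketch (not the actual implementation), assuming a ``networkx`` topology on the model:

.. code:: python

    def assign_node(agent, model, node_id=None, topology=None):
        G = topology if topology is not None else model.topology
        if node_id is None:
            # Pick a random node that has no agent yet...
            free = [n for n in G.nodes if G.nodes[n].get("agent") is None]
            if free:
                node_id = model.random.choice(free)
            else:
                # ...or add a brand new node if every node is taken.
                node_id = len(G.nodes)
                G.add_node(node_id)
        G.nodes[node_id]["agent"] = agent
        return node_id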
Can Soil environments include more than one network / topology?
###############################################################
Yes, but each network has to be included manually.
Somewhere between 0.20 and 0.30 we added the ability to include multiple networks, but it was deemed too complex and was removed.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 7.0 KiB

BIN
docs/output_30_0.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 4.7 KiB

BIN
docs/output_34_0.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

BIN
docs/output_49_0.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 26 KiB

BIN
docs/output_50_0.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 31 KiB

(Roughly thirty more binary image files were deleted in this changeset; their names and previews are not shown in this view.)

View File

@@ -1,94 +0,0 @@
Quickstart
----------
This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.
There are mainly two parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
.. image:: soil.png
:width: 80%
:align: center
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.
The configuration is the following:
.. literalinclude:: quickstart.yml
:language: yaml
Configuration
=============
You may :download:`download the file <quickstart.yml>` directly.
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities of changing from one state to another, either spontaneously or because of contagion from neighboring agents.
Running the simulation
======================
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
.. code::
soil --graph --csv quickstart.yml [13:35:29]
INFO:soil:Using config(s): quickstart
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
INFO:soil:Starting simulation quickstart at 13:35:30.
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
INFO:soil:Dumping results to soil_output/quickstart
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
The ``CSV`` file should look like this:
.. code::
agent_id,t_step,key,value
env,0,neutral_discontent_spon_prob,0.05
env,0,neutral_discontent_infected_prob,0.1
env,0,neutral_content_spon_prob,0.2
env,0,neutral_content_infected_prob,0.4
env,0,discontent_neutral,0.2
env,0,discontent_content,0.05
env,0,content_discontent,0.05
env,0,variance_d_c,0.05
env,0,variance_c_d,0.1
Results and visualization
=========================
The environment variables are marked with ``env`` as the ``agent_id``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
The dynamic graph is exported as a .gexf file which could be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
```
pip install soil[web]
```
Once installed, the soil web UI can be run in two ways:
```
soil-web
OR
python -m soil.web
```

View File

@@ -1,30 +0,0 @@
---
name: quickstart
num_trials: 1
max_time: 1000
network_agents:
- agent_type: SISaModel
state:
id: neutral
weight: 1
- agent_type: SISaModel
state:
id: content
weight: 2
network_params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
environment_params:
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

1
docs/requirements.txt Normal file
View File

@@ -0,0 +1 @@
ipython>=7.31.1

12
docs/soil-vs.rst Normal file
View File

@@ -0,0 +1,12 @@
### MESA
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
Here are some reasons to use Soil instead of plain mesa:
- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed)
- Reporting functions that aggregate multiple

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -1,28 +0,0 @@
---
name: simple
dir_path: "/tmp/"
num_trials: 3
dry_run: True
max_time: 100
interval: 1
seed: "CompleteSeed!"
dump: false
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
state:
id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_class: Environment
environment_params:
am_i_complete: true
default_state:
incidents: 0
states:
- name: 'The first node'
- name: 'The second node'

View File

@@ -0,0 +1,39 @@
from networkx import Graph
import random
import networkx as nx
from soil import Simulation, Environment, CounterModel, parameters
def mygenerator(n=5, n_edges=5):
"""
Just a simple generator that creates a network with n nodes and
n_edges edges. Edges are assigned randomly, only avoiding self loops.
"""
G = nx.Graph()
for i in range(n):
G.add_node(i)
for i in range(n_edges):
nodes = list(G.nodes)
n_in = random.choice(nodes)
nodes.remove(n_in) # Avoid loops
n_out = random.choice(nodes)
G.add_edge(n_in, n_out)
return G
class GeneratorEnv(Environment):
"""Using a custom generator for the network"""
generator: parameters.function = staticmethod(mygenerator)
def init(self):
self.create_network(generator=self.generator, n=10, n_edges=5)
self.add_agents(CounterModel)
sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)
if __name__ == '__main__':
sim.run(dump=False)

View File

@@ -0,0 +1,41 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta
class Fibonacci(FSM):
"""Agent that only executes in t_steps that are Fibonacci numbers"""
prev = 1
@default_state
@state
def counting(self):
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, Delta(prev)
class Odds(FSM):
"""Agent that only executes in odd t_steps"""
@default_state
@state
def odds(self):
self.log("Stopping at {}".format(self.now))
return None, Delta(1 + self.now % 2)
from soil import Environment, Simulation
from networkx import complete_graph
class TimeoutsEnv(Environment):
def init(self):
self.create_network(generator=complete_graph, n=2)
self.add_agent(agent_class=Fibonacci, node_id=0)
self.add_agent(agent_class=Odds, node_id=1)
sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)
if __name__ == "__main__":
sim.run(dump=False)

View File

@@ -0,0 +1,9 @@
This example can be run with command-line options, like this:
```bash
python cars.py --level DEBUG -e summary --csv
#or
soil cars.py -e summary
```
This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.

View File

@@ -0,0 +1,231 @@
"""
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
from their location to their desired location.
An example scenario could play like the following:
- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
- Passenger(1) tells every driver that it wants to request a Journey.
- Each driver receives the request.
If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
- When Passenger(1) accepts the request, two things happen:
- Passenger(1) changes its state to `driving_home`
- Driver(2) starts moving towards the origin of the Journey
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
- When Driver(2) reaches the destination (carrying Passenger(1) along):
- Driver(2) starts wandering again
- Passenger(1) dies, and is removed from the simulation
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid
# More complex scenarios may use more than one type of message between objects.
# A common pattern is to use `enum.Enum` to represent state changes in a request.
@dataclass
class Journey:
"""
This represents a request for a journey. Passengers and drivers exchange this object.
A journey may have a driver assigned or not. If the driver has not been assigned, this
object is considered a "request for a journey".
"""
origin: (int, int)
destination: (int, int)
tip: float
passenger: Passenger
driver: Optional[Driver] = None
class City(EventedEnvironment):
"""
An environment with a grid where drivers and passengers will be placed.
The number of drivers and riders is configurable through its parameters:
:param str n_cars: The total number of drivers to add
:param str n_passengers: The number of passengers in the simulation
:param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
and `n_cars` params.
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
n_cars = 1
n_passengers = 10
height = 100
width = 100
def init(self):
self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
if not self.agents:
self.add_agents(Driver, k=self.n_cars)
self.add_agents(Passenger, k=self.n_passengers)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
self.total_earnings = 0
self.add_model_reporter("total_earnings")
@report
@property
def number_passengers(self):
return self.count_agents(agent_class=Passenger)
class Driver(Evented, FSM):
pos = None
journey = None
earnings = 0
def on_receive(self, msg, sender):
"""This is not a state. It will run (and block) every time check_messages is invoked"""
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
msg.driver = self
self.journey = msg
def check_passengers(self):
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.debug(f"Passengers left {c}")
if not c:
self.die("No more passengers")
@default_state
@state
def wandering(self):
"""Move around the city until a journey is accepted"""
target = None
self.check_passengers()
self.journey = None
while self.journey is None: # No potential journeys detected (see on_receive)
if target is None or not self.move_towards(target):
target = self.random.choice(
self.model.grid.get_neighborhood(self.pos, moore=False)
)
self.check_passengers()
# This will call on_receive behind the scenes, and the agent's status will be updated
self.check_messages()
yield Delta(30) # Wait at least 30 seconds before checking again
try:
# Re-send the journey to the passenger, to confirm that we have been selected
self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
except events.TimedOut:
# No journey has been accepted. Try again
self.journey = None
return
return self.driving
@state
def driving(self):
"""The journey has been accepted. Pick them up and take them to their destination"""
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
while self.move_towards(self.journey.origin):
yield
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
while self.move_towards(self.journey.destination, with_passenger=True):
yield
self.info("Arrived at destination")
self.earnings += self.journey.tip
self.model.total_earnings += self.journey.tip
self.check_passengers()
return self.wandering
def move_towards(self, target, with_passenger=False):
"""Move one cell at a time towards a target"""
self.debug(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
next_pos = [self.pos[0], self.pos[1]]
for idx in [0, 1]:
if self.pos[idx] < target[idx]:
next_pos[idx] += 1
break
if self.pos[idx] > target[idx]:
next_pos[idx] -= 1
break
self.model.grid.move_agent(self, tuple(next_pos))
if with_passenger:
self.journey.passenger.pos = (
self.pos
) # This could be communicated through messages
return True
class Passenger(Evented, FSM):
pos = None
def on_receive(self, msg, sender):
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
if isinstance(msg, Journey):
self.journey = msg
return msg
@default_state
@state
def asking(self):
destination = (
self.random.randint(0, self.model.grid.height-1),
self.random.randint(0, self.model.grid.width-1),
)
self.journey = None
journey = Journey(
origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self,
)
timeout = 60
expiration = self.now + timeout
self.info(f"Asking for journey at: { self.pos }")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey:
self.debug(f"Waiting for responses at: { self.pos }")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Still no response. Waiting at: { self.pos }")
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
self.info(f"Got a response! Waiting for driver")
return self.driving_home
@state
def driving_home(self):
while (
self.pos[0] != self.journey.destination[0]
or self.pos[1] != self.journey.destination[1]
):
try:
yield self.received(timeout=60)
except events.TimedOut:
pass
self.die("Got home safe!")
simulation = Simulation(name="RideHailing",
model=City,
seed="carsSeed",
max_time=1000,
parameters=dict(n_passengers=2))
if __name__ == "__main__":
easy(simulation)
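A minimal usage sketch (not part of the example file above), assuming that `Simulation.run()` returns the resulting environments, as it does in the other programmatic examples: the `simulation` object can also be run directly and its model reporter inspected afterwards.
```
envs = simulation.run()          # assumed to return the resulting City environments
for env in envs:
    # total_earnings is registered as a model reporter in City.init()
    print(env.total_earnings)
```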


@@ -0,0 +1,7 @@
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
if __name__ == "__main__":
sim.run()

examples/mesa/server.py Normal file

@@ -0,0 +1,111 @@
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.UserParam import Slider, Choice
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
wealths = {
node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
}
portrayal["nodes"] = [
{
"id": node_id,
"size": 2 * (wealth + 1),
"color": "#CC0000" if wealth == 0 else "#007959",
# "color": "#CC0000",
"label": f"{node_id}: {wealth}",
}
for (node_id, wealth) in wealths.items()
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth * 10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})",
}
grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
parameters = {
"N": Slider(
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"height": Slider(
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": Slider(
"width",
5,
5,
10,
1,
description="Grid width",
),
"agent_class": Choice(
"Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
),
"generator": graph_generator,
}
canvas_element = CanvasGrid(
gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
)
server.port = 8521
if __name__ == '__main__':
server.launch(open_browser=False)


@@ -0,0 +1,137 @@
"""
This is an example that adds soil agents and environment in a normal
mesa workflow.
"""
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment, serialization
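# Note: compute_gini below implements the standard discrete Gini coefficient,
#   G = 1 + 1/N - 2 * sum_{j=1..N} (N + 1 - j) * x_j / (N * sum(x)),
# where the wealths x_1 <= ... <= x_N are sorted in ascending order. The
# enumerate() call is 0-based, so its weight (N - i) equals (N + 1 - j).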
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1, **kwargs):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
print("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(MoneyAgent, NetworkAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighbors())
self.info("Trying to give money")
self.info("Cellmates: ", cellmates)
self.info("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(
self,
width,
height,
N,
generator=graph_generator,
agent_class=SocialMoneyAgent,
topology=None,
**kwargs
):
generator = serialization.deserialize(generator)
agent_class = serialization.deserialize(agent_class, globs=globals())
topology = generator(n=N)
super().__init__(topology=topology, N=N, **kwargs)
self.grid = MultiGrid(width, height, False)
self.populate_network(agent_class=agent_class)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
if __name__ == "__main__":
fixed_params = {
"generator": nx.complete_graph,
"width": 10,
"network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
"height": 10,
}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(
MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

examples/mesa/wealth.py Normal file

@@ -0,0 +1,87 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(Agent):
"""An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == "__main__":
fixed_params = {"width": 10, "height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(
MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)


@@ -89,11 +89,11 @@
"max_time: 30\r\n",
"name: Sim_all_dumb\r\n",
"network_agents:\r\n",
"- agent_type: DumbViewer\r\n",
"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
"- agent_type: DumbViewer\r\n",
"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -113,19 +113,19 @@
"max_time: 30\r\n",
"name: Sim_half_herd\r\n",
"network_agents:\r\n",
"- agent_type: DumbViewer\r\n",
"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
"- agent_type: DumbViewer\r\n",
"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
"- agent_type: HerdViewer\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
"- agent_type: HerdViewer\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -145,12 +145,12 @@
"max_time: 30\r\n",
"name: Sim_all_herd\r\n",
"network_agents:\r\n",
"- agent_type: HerdViewer\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
"- agent_type: HerdViewer\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
@@ -172,12 +172,12 @@
"max_time: 30\r\n",
"name: Sim_wise_herd\r\n",
"network_agents:\r\n",
"- agent_type: HerdViewer\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
"- agent_type: WiseViewer\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -198,12 +198,12 @@
"max_time: 30\r\n",
"name: Sim_all_wise\r\n",
"network_agents:\r\n",
"- agent_type: WiseViewer\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
"- agent_type: WiseViewer\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",


@@ -1,138 +0,0 @@
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_all_dumb
network_agents:
- agent_type: DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_half_herd
network_agents:
- agent_type: DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
state:
has_tv: true
weight: 1
- agent_type: HerdViewer
state:
has_tv: false
weight: 1
- agent_type: HerdViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_all_herd
network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
name: Sim_wise_herd
network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
name: Sim_all_wise
network_agents:
- agent_type: WiseViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50


@@ -1,81 +0,0 @@
from soil.agents import FSM, state, default_state, prob
import logging
class DumbViewer(FSM):
'''
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
'''
defaults = {
'prob_neighbor_spread': 0.5,
'prob_tv_spread': 0.1,
}
@default_state
@state
def neutral(self):
if self['has_tv']:
if prob(self.env['prob_tv_spread']):
self.set_state(self.infected)
@state
def infected(self):
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
if prob(self.env['prob_neighbor_spread']):
neighbor.infect()
def infect(self):
self.set_state(self.infected)
class HerdViewer(DumbViewer):
'''
A viewer whose probability of infection depends on the state of its neighbors.
'''
level = logging.DEBUG
def infect(self):
infected = self.count_neighboring_agents(state_id=self.infected.id)
total = self.count_neighboring_agents()
prob_infect = self.env['prob_neighbor_spread'] * infected/total
self.debug('prob_infect', prob_infect)
if prob(prob_infect):
self.set_state(self.infected.id)
class WiseViewer(HerdViewer):
'''
A viewer that can change its mind.
'''
defaults = {
'prob_neighbor_spread': 0.5,
'prob_neighbor_cure': 0.25,
'prob_tv_spread': 0.1,
}
@state
def cured(self):
prob_cure = self.env['prob_neighbor_cure']
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
if prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug('Viewer {} cannot be cured'.format(neighbor.id))
def cure(self):
self.set_state(self.cured.id)
@state
def infected(self):
cured = max(self.count_neighboring_agents(self.cured.id),
1.0)
infected = max(self.count_neighboring_agents(self.infected.id),
1.0)
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
if prob(prob_cure):
return self.cure()
return self.set_state(super().infected)


@@ -0,0 +1,134 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
from soil.parameters import *
import logging
from soil.environment import Environment
class DumbViewer(FSM, NetworkAgent):
"""
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
"""
has_been_infected: bool = False
has_tv: bool = False
@default_state
@state
def neutral(self):
if self.has_tv:
if self.prob(self.get("prob_tv_spread")):
return self.infected
if self.has_been_infected:
return self.infected
@state
def infected(self):
for neighbor in self.get_neighbors(state_id=self.neutral.id):
if self.prob(self.get("prob_neighbor_spread")):
neighbor.infect()
def infect(self):
"""
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.has_been_infected = True
class HerdViewer(DumbViewer):
"""
A viewer whose probability of infection depends on the state of its neighbors.
"""
def infect(self):
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
infected = self.count_neighbors(state_id=self.infected.id)
total = self.count_neighbors()
prob_infect = self.get("prob_neighbor_spread") * infected / total
self.debug("prob_infect", prob_infect)
if self.prob(prob_infect):
self.has_been_infected = True
class WiseViewer(HerdViewer):
"""
A viewer that can change its mind.
"""
@state
def cured(self):
prob_cure = self.get("prob_neighbor_cure")
for neighbor in self.get_neighbors(state_id=self.infected.id):
if self.prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self):
self.has_been_cured = True
@state
def infected(self):
if self.has_been_cured:
return self.cured
cured = max(self.count_neighbors(self.cured.id), 1.0)
infected = max(self.count_neighbors(self.infected.id), 1.0)
prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
if self.prob(prob_cure):
return self.cured
class NewsSpread(Environment):
ratio_dumb: probability = 1
ratio_herd: probability = 0
ratio_wise: probability = 0
prob_tv_spread: probability = 0.1
prob_neighbor_spread: probability = 0.1
prob_neighbor_cure: probability = 0.05
def init(self):
self.populate_network([DumbViewer, HerdViewer, WiseViewer],
[self.ratio_dumb, self.ratio_herd, self.ratio_wise])
from itertools import product
from soil import Simulation
# We want to investigate the effect of different agent distributions on the spread of news.
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
# one with a small-world structure and one with a connected structure.
counter = 0
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
for (generator, netparams) in {
"barabasi_albert_graph": {"m": 5},
"erdos_renyi_graph": {"p": 0.1},
}.items():
print(r1, r2, 1-r1-r2, generator)
# Create new simulation
netparams["n"] = 500
Simulation(
name='newspread_sim',
model=NewsSpread,
parameters=dict(
ratio_dumb=r1,
ratio_herd=r2,
ratio_wise=1-r1-r2,
network_generator=generator,
network_params=netparams,
prob_neighbor_spread=0,
),
iterations=5,
max_steps=300,
dump=False,
).run()
counter += 1
# Run all the necessary instances
print(f"A total of {counter} simulations were run.")

examples/programmatic/.gitignore vendored Normal file

@@ -0,0 +1 @@
Programmatic*


@@ -0,0 +1,53 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, Environment, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
G.add_node(2)
return G
class MyAgent(agents.NetworkAgent, agents.FSM):
times_run = 0
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if self.prob(0.2):
self.times_run += 1
self.info("This runs 2/10 times on average")
class ProgrammaticEnv(Environment):
def init(self):
self.create_network(generator=mygenerator)
assert len(self.G)
self.populate_network(agent_class=MyAgent)
self.add_agent_reporter('times_run')
simulation = Simulation(
name="Programmatic",
model=ProgrammaticEnv,
seed='Program',
iterations=1,
max_time=100,
dump=False,
)
if __name__ == "__main__":
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = simulation.run()
for agent in envs[0].agents:
print(agent.times_run)


@@ -1,174 +0,0 @@
from soil.agents import FSM, state, default_state
from soil import Environment
from random import random, shuffle
from itertools import islice
import logging
class CityPubs(Environment):
'''Environment with Pubs'''
level = logging.INFO
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
super(CityPubs, self).__init__(*args, **kwargs)
pubs = {}
for i in range(number_of_pubs):
newpub = {
'name': 'The awesome pub #{}'.format(i),
'open': True,
'capacity': pub_capacity,
'occupancy': 0,
}
pubs[newpub['name']] = newpub
self['pubs'] = pubs
def enter(self, pub_id, *nodes):
'''Agents will try to enter. The pub checks if it is possible'''
try:
pub = self['pubs'][pub_id]
except KeyError:
raise ValueError('Pub {} is not available'.format(pub_id))
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
return False
pub['occupancy'] += len(nodes)
for node in nodes:
node['pub'] = pub_id
return True
def available_pubs(self):
for pub in self['pubs'].values():
if pub['open'] and (pub['occupancy'] < pub['capacity']):
yield pub['name']
def exit(self, pub_id, *node_ids):
'''Agents will notify the pub they want to leave'''
try:
pub = self['pubs'][pub_id]
except KeyError:
raise ValueError('Pub {} is not available'.format(pub_id))
for node_id in node_ids:
node = self.get_agent(node_id)
if pub_id == node['pub']:
del node['pub']
pub['occupancy'] -= 1
class Patron(FSM):
'''Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home.
'''
level = logging.INFO
defaults = {
'pub': None,
'drunk': False,
'pints': 0,
'max_pints': 3,
}
@default_state
@state
def looking_for_friends(self):
'''Look for friends to drink with'''
self.info('I am looking for friends')
available_friends = list(self.get_agents(drunk=False,
pub=None,
state_id=self.looking_for_friends.id))
if not available_friends:
self.info('Life sucks and I\'m alone!')
return self.at_home
befriended = self.try_friends(available_friends)
if befriended:
return self.looking_for_pub
@state
def looking_for_pub(self):
'''Look for a pub that accepts me and my friends'''
if self['pub'] != None:
return self.sober_in_pub
self.debug('I am looking for a pub')
group = list(self.get_neighboring_agents())
for pub in self.env.available_pubs():
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
if self.env.enter(pub, self, *group):
self.info('We\'re all {} getting in {}!'.format(len(group), pub))
return self.sober_in_pub
@state
def sober_in_pub(self):
'''Drink up.'''
self.drink()
if self['pints'] > self['max_pints']:
return self.drunk_in_pub
@state
def drunk_in_pub(self):
'''I'm out. Take me home!'''
self.info('I\'m so drunk. Take me home!')
self['drunk'] = True
pass # out drunk
@state
def at_home(self):
'''The end'''
self.debug('Life sucks. I\'m home!')
def drink(self):
self['pints'] += 1
self.debug('Cheers to that')
def kick_out(self):
self.set_state(self.at_home)
def befriend(self, other_agent, force=False):
'''
Try to become friends with another agent. The chances of
success depend on both agents' openness.
'''
if force or self['openness'] > random():
self.env.add_edge(self, other_agent)
self.info('Made some friend {}'.format(other_agent))
return True
return False
def try_friends(self, others):
''' Look for random agents around me and try to befriend them'''
befriended = False
k = int(10*self['openness'])
shuffle(others)
for friend in islice(others, k): # random.choice >= 3.7
if friend == self:
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug('Hooray! new friend: {}'.format(friend.id))
befriended = True
else:
self.debug('{} does not want to be friends'.format(friend.id))
return befriended
class Police(FSM):
'''Simple agent to take drunk people out of pubs.'''
level = logging.INFO
@default_state
@state
def patrol(self):
drunksters = list(self.get_agents(drunk=True,
state_id=Patron.drunk_in_pub.id))
for drunk in drunksters:
self.info('Kicking out the trash: {}'.format(drunk.id))
drunk.kick_out()
else:
self.info('No trash to take out. Too bad.')
if __name__ == '__main__':
from soil import simulation
simulation.run_from_config('pubcrawl.yml',
dry_run=True,
dump=None,
parallel=False)


@@ -1,26 +0,0 @@
---
name: pubcrawl
num_trials: 3
max_time: 10
dump: false
network_params:
# Generate 100 empty nodes. They will be assigned a network agent
generator: empty_graph
n: 30
network_agents:
- agent_type: pubcrawl.Patron
description: Extroverted patron
state:
openness: 1.0
weight: 9
- agent_type: pubcrawl.Patron
description: Introverted patron
state:
openness: 0.1
weight: 1
environment_agents:
- agent_type: pubcrawl.Police
environment_class: pubcrawl.CityPubs
environment_params:
altercations: 0
number_of_pubs: 3


@@ -0,0 +1,195 @@
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment, Simulation, parameters
from itertools import islice
import networkx as nx
import logging
class CityPubs(Environment):
"""Environment with Pubs"""
level = logging.INFO
number_of_pubs: parameters.Integer = 3
ratio_extroverted: parameters.probability = 0.1
pub_capacity: parameters.Integer = 10
def init(self):
self.pubs = {}
for i in range(self.number_of_pubs):
newpub = {
"name": "The awesome pub #{}".format(i),
"open": True,
"capacity": self.pub_capacity,
"occupancy": 0,
}
self.pubs[newpub["name"]] = newpub
self.add_agent(agent_class=Police)
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
[self.ratio_extroverted, 1-self.ratio_extroverted])
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
def enter(self, pub_id, *nodes):
"""Agents will try to enter. The pub checks if it is possible"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
return False
pub["occupancy"] += len(nodes)
for node in nodes:
node["pub"] = pub_id
return True
def available_pubs(self):
for pub in self["pubs"].values():
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
yield pub["name"]
def exit(self, pub_id, *node_ids):
"""Agents will notify the pub they want to leave"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
for node_id in node_ids:
node = self.get_agent(node_id)
if pub_id == node["pub"]:
del node["pub"]
pub["occupancy"] -= 1
class Patron(FSM, NetworkAgent):
"""Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home.
"""
level = logging.DEBUG
pub = None
drunk = False
pints = 0
max_pints = 3
kicked_out = False
@default_state
@state
def looking_for_friends(self):
"""Look for friends to drink with"""
self.info("I am looking for friends")
available_friends = list(
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
)
if not available_friends:
self.info("Life sucks and I'm alone!")
return self.at_home
befriended = self.try_friends(available_friends)
if befriended:
return self.looking_for_pub
@state
def looking_for_pub(self):
"""Look for a pub that accepts me and my friends"""
if self["pub"] != None:
return self.sober_in_pub
self.debug("I am looking for a pub")
group = list(self.get_neighbors())
for pub in self.model.available_pubs():
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
if self.model.enter(pub, self, *group):
self.info("We're all {} getting in {}!".format(len(group), pub))
return self.sober_in_pub
@state
def sober_in_pub(self):
"""Drink up."""
self.drink()
if self["pints"] > self["max_pints"]:
return self.drunk_in_pub
@state
def drunk_in_pub(self):
"""I'm out. Take me home!"""
self.info("I'm so drunk. Take me home!")
self["drunk"] = True
if self.kicked_out:
return self.at_home
pass # out drunk
@state
def at_home(self):
"""The end"""
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
self.debug("I'm home. Just like {} of my friends".format(len(others)))
def drink(self):
self["pints"] += 1
self.debug("Cheers to that")
def kick_out(self):
self.kicked_out = True
def befriend(self, other_agent, force=False):
"""
Try to become friends with another agent. The chances of
success depend on both agents' openness.
"""
if force or self["openness"] > self.random.random():
self.add_edge(self, other_agent)
self.info("Made some friend {}".format(other_agent))
return True
return False
def try_friends(self, others):
"""Look for random agents around me and try to befriend them"""
befriended = False
k = int(10 * self["openness"])
self.random.shuffle(others)
for friend in islice(others, k): # random.choice >= 3.7
if friend == self:
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug("Hooray! new friend: {}".format(friend.unique_id))
befriended = True
else:
self.debug("{} does not want to be friends".format(friend.unique_id))
return befriended
class Police(FSM):
"""Simple agent to take drunk people out of pubs."""
level = logging.INFO
@default_state
@state
def patrol(self):
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
for drunk in drunksters:
self.info("Kicking out the trash: {}".format(drunk.unique_id))
drunk.kick_out()
else:
self.info("No trash to take out. Too bad.")
sim = Simulation(
model=CityPubs,
name="pubcrawl",
iterations=3,
max_steps=10,
dump=False,
parameters=dict(
network_generator=nx.empty_graph,
network_params={"n": 30},
model=CityPubs,
altercations=0,
number_of_pubs=3,
)
)
if __name__ == "__main__":
sim.run(parallel=False)


@@ -0,0 +1,14 @@
There are two similar implementations of this simulation.
- `basic`. Uses simple primitives.
- `improved`. Uses more advanced features, such as the `time` module to avoid unnecessary computations (i.e., skipping steps) and generator functions (see the sketch after this section).
The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:
```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```
To learn more about how this functionality works, check out the `soil.easy` function.
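As a rough illustration of that `time`-based mechanism (a hedged sketch, not taken from either implementation), a state can return the next state together with a `Delta`, so that the agent is only scheduled again after that much simulated time has passed:
```
from soil import FSM, state, default_state
from soil.time import Delta

class Sleeper(FSM):
    # Hypothetical agent, for illustration only
    @default_state
    @state
    def rest(self):
        # Stay in this state, but skip ahead 10 time units instead of
        # being activated on every step
        return self.rest, Delta(10)
```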


@@ -1,120 +0,0 @@
from soil.agents import FSM, state, default_state, BaseAgent
from enum import Enum
from random import random, choice
from itertools import islice
import logging
import math
class Genders(Enum):
male = 'male'
female = 'female'
class RabbitModel(FSM):
level = logging.INFO
defaults = {
'age': 0,
'gender': Genders.male.value,
'mating_prob': 0.001,
'offspring': 0,
}
sexual_maturity = 4*30
life_expectancy = 365 * 3
gestation = 33
pregnancy = -1
max_females = 5
@default_state
@state
def newborn(self):
self['age'] += 1
if self['age'] >= self.sexual_maturity:
return self.fertile
@state
def fertile(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
if self['gender'] == Genders.female.value:
return
# Males try to mate
females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
for f in islice(females, self.max_females):
r = random()
if r < self['mating_prob']:
self.impregnate(f)
break # Take a break
def impregnate(self, whom):
if self['gender'] == Genders.female.value:
raise NotImplementedError('Females cannot impregnate')
whom['pregnancy'] = 0
whom['mate'] = self.id
whom.set_state(whom.pregnant)
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
@state
def pregnant(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
self['pregnancy'] += 1
self.debug('Pregnancy: {}'.format(self['pregnancy']))
if self['pregnancy'] >= self.gestation:
number_of_babies = int(8+4*random())
self.info('Having {} babies'.format(number_of_babies))
for i in range(number_of_babies):
state = {}
state['gender'] = choice(list(Genders)).value
child = self.env.add_node(self.__class__, state)
self.env.add_edge(self.id, child.id)
self.env.add_edge(self['mate'], child.id)
# self.add_edge()
self.debug('A BABY IS COMING TO LIFE')
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.global_topology.number_of_nodes())+1
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
self['offspring'] += 1
self.env.get_agent(self['mate'])['offspring'] += 1
del self['mate']
self['pregnancy'] = -1
return self.fertile
@state
def dead(self):
self.info('Agent {} is dying'.format(self.id))
if 'pregnancy' in self and self['pregnancy'] > -1:
self.info('A mother has died carrying a baby!!')
self.die()
return
class RandomAccident(BaseAgent):
level = logging.DEBUG
def step(self):
rabbits_total = self.global_topology.number_of_nodes()
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
for i in self.env.network_agents:
if i.state['id'] == i.dead.id:
continue
r = random()
if r < prob_death:
self.debug('I killed a rabbit: {}'.format(i.id))
rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
i.set_state(i.dead)
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
if self.count_agents(state_id=RabbitModel.dead.id) == self.global_topology.number_of_nodes():
self.die()


@@ -0,0 +1,153 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
from rabbits_basic_sim import RabbitEnv
class RabbitsImprovedEnv(RabbitEnv):
def init(self):
"""Initialize the environment with the new versions of the agents"""
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
class Rabbit(FSM, NetworkAgent):
sexual_maturity = 30
life_expectancy = 300
birth = None
@property
def age(self):
if self.birth is None:
return None
return self.now - self.birth
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.birth = self.now
self.offspring = 0
return self.youngling, Delta(self.sexual_maturity - self.age)
@state
def youngling(self):
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Do not try to impregnate other females
class Female(Rabbit):
gestation = 10
conception = None
@state
def fertile(self):
# Just wait for a Male
if self.age > self.life_expectancy:
return self.dead
if self.conception is not None:
return self.pregnant
@property
def pregnancy(self):
if self.conception is None:
return None
return self.now - self.conception
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.conception = self.now
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.debug("I am pregnant")
if self.age > self.life_expectancy:
self.info("Dying before giving birth")
return self.die()
if self.pregnancy >= self.gestation:
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
if self.mate:
child.add_edge(self.mate)
self.mate.offspring += 1
else:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
return self.fertile
def die(self):
if self.pregnancy is not None:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()


@@ -1,23 +0,0 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 500
interval: 1
seed: MySeed
agent_type: RabbitModel
environment_agents:
- agent_type: RandomAccident
environment_params:
prob_death: 0.001
default_state:
mating_prob: 0.01
topology:
nodes:
- id: 1
state:
gender: female
- id: 0
state:
gender: male
directed: true
links: []


@@ -0,0 +1,161 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
prob_death: params.probability = 1e-100
def init(self):
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
@report
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@report
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@report
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(NetworkAgent, FSM):
sexual_maturity = 30
life_expectancy = 300
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.age = 0
self.offspring = 0
return self.youngling
@state
def youngling(self):
self.age += 1
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
self.age += 1
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Take a break
class Female(Rabbit):
gestation = 10
pregnancy = -1
@state
def fertile(self):
# Just wait for a Male
self.age += 1
if self.age > self.life_expectancy:
return self.dead
if self.pregnancy >= 0:
return self.pregnant
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.pregnancy = 0
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.info("I am pregnant")
self.age += 1
if self.age >= self.life_expectancy:
return self.die()
if self.pregnancy < self.gestation:
self.pregnancy += 1
return
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
try:
child.add_edge(self.mate)
self.model.agents[self.mate].offspring += 1
except ValueError:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
self.pregnancy = -1
return self.fertile
def die(self):
if "pregnancy" in self and self["pregnancy"] > -1:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.prob_death * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.get_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()


@@ -0,0 +1,47 @@
"""
Example of a fully programmatic simulation, without definition files, where agents
control when they are next scheduled by returning time deltas.
"""
from soil import Simulation, agents, Environment
from soil.time import Delta
class MyAgent(agents.FSM):
"""
An agent that first does a ping
"""
defaults = {"pong_counts": 2}
@agents.default_state
@agents.state
def ping(self):
self.info("Ping")
return self.pong, Delta(self.random.expovariate(1 / 16))
@agents.state
def pong(self):
self.info("Pong")
self.pong_counts -= 1
self.info(str(self.pong_counts))
if self.pong_counts < 1:
return self.die()
return None, Delta(self.random.expovariate(1 / 16))
class RandomEnv(Environment):
def init(self):
self.add_agent(agent_class=MyAgent)
s = Simulation(
name="Programmatic",
model=RandomEnv,
iterations=1,
max_time=100,
dump=False,
)
envs = s.run()


@@ -0,0 +1,341 @@
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
from soil import Environment, Simulation
from soil.parameters import *
from soil.utils import int_seed
class TerroristEnvironment(Environment):
n: Integer = 100
radius: Float = 0.2
information_spread_intensity: probability = 0.7
terrorist_additional_influence: probability = 0.035
max_vulnerability: probability = 0.7
prob_interaction: probability = 0.5
# TrainingAreaModel and HavenModel
training_influence: probability = 0.20
haven_influence: probability = 0.20
# TerroristNetworkModel
vision_range: Float = 0.30
sphere_influence: Integer = 2
weight_social_distance: Float = 0.035
weight_link_distance: Float = 0.035
ratio_civil: probability = 0.8
ratio_leader: probability = 0.1
ratio_training: probability = 0.05
ratio_haven: probability = 0.05
def init(self):
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
self.populate_network([
TerroristNetworkModel.w(state_id='civilian'),
TerroristNetworkModel.w(state_id='leader'),
TrainingAreaModel,
HavenModel
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
def generator(self, *args, **kwargs):
return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
information_spread_intensity
terrorist_additional_influence
min_vulnerability (optional else zero)
max_vulnerability
"""
information_spread_intensity = 0.1
terrorist_additional_influence = 0.1
min_vulnerability = 0
max_vulnerability = 1
def init(self):
if self.state_id == self.civilian.id: # Civilian
self.mean_belief = self.model.random.uniform(0.00, 0.5)
elif self.state_id == self.terrorist.id: # Terrorist
self.mean_belief = self.random.uniform(0.8, 1.00)
elif self.state_id == self.leader.id: # Leader
self.mean_belief = 1.00
else:
raise Exception("Invalid state id: {}".format(self["id"]))
self.vulnerability = self.random.uniform(
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
)
@default_state
@state
def civilian(self):
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(
n for n in neighbours if self.random.random() <= self.model.prob_interaction
)
influence = sum(self.degree(i) for i in interactions)
mean_belief = sum(
i.mean_belief * self.degree(i) / influence for i in interactions
)
mean_belief = (
mean_belief * self.information_spread_intensity
+ self.mean_belief * (1 - self.information_spread_intensity)
)
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
if self.mean_belief >= 0.8:
return self.terrorist
@state
def leader(self):
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
for neighbour in self.get_neighbors(
state_id=[self.terrorist.id, self.leader.id]
):
if self.betweenness(neighbour) > self.betweenness(self):
return self.terrorist
@state
def terrorist(self):
neighbours = self.get_agents(
state_id=[self.terrorist.id, self.leader.id],
agent_class=TerroristSpreadModel,
limit_neighbors=True,
)
if len(neighbours) > 0:
influence = sum(self.degree(n) for n in neighbours)
mean_belief = sum(
n.mean_belief * self.degree(n) / influence for n in neighbours
)
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
self.mean_belief = self.mean_belief ** (
1 - self.terrorist_additional_influence
)
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
for neighbour in neighbours:
if self.betweenness(self) < self.betweenness(neighbour):
return
return self.leader
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = agent.node_id if agent else self.node_id
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_degree"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[agent.node_id]
def betweenness(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_betweenness"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[agent.node_id]
class TrainingAreaModel(FSM, Geo):
"""
Settings:
training_influence
min_vulnerability
Requires TerroristSpreadModel.
"""
training_influence = 0.1
min_vulnerability = 0
def init(self):
self.mean_believe = 1
self.vulnerability = 0
@default_state
@state
def terrorist(self):
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.training_influence
)
class HavenModel(FSM, Geo):
"""
Settings:
haven_influence
min_vulnerability
max_vulnerability
Requires TerroristSpreadModel.
"""
min_vulnerability = 0
haven_influence = 0.1
max_vulnerability = 0.5
def init(self):
self.mean_believe = 0
self.vulnerability = 0
def get_occupants(self, **kwargs):
return self.get_neighbors(agent_class=TerroristSpreadModel,
**kwargs)
@default_state
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
if not civilians:
return self.terrorist
for neighbour in self.get_occupants():
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * (
1 - self.haven_influence
)
return self.civilian
@state
def terrorist(self):
for neighbour in self.get_occupants():
if neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.haven_influence
)
return self.terrorist
class TerroristNetworkModel(TerroristSpreadModel):
"""
Settings:
sphere_influence
vision_range
weight_social_distance
weight_link_distance
"""
sphere_influence: float = 1
vision_range: float = 1
weight_social_distance: float = 0.5
weight_link_distance: float = 0.2
@state
def terrorist(self):
self.update_relationships()
return super().terrorist()
@state
def leader(self):
self.update_relationships()
return super().leader()
def update_relationships(self):
if self.count_neighbors(state_id=self.civilian.id) == 0:
close_ups = set(
self.geo_search(
radius=self.vision_range, agent_class=TerroristNetworkModel
)
)
step_neighbours = set(
self.ego_search(
self.sphere_influence,
agent_class=TerroristNetworkModel,
center=False,
)
)
neighbours = set(
agent.unique_id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.unique_id)
spatial_proximity = 1 - self.get_distance(agent.unique_id)
prob_new_interaction = (
self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent.state_id == "civilian"
and self.random.random() < prob_new_interaction
):
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x)
dy = abs(source_y - target_y)
return (dx**2 + dy**2) ** (1 / 2)
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.G, self.unique_id, target)
except nx.NetworkXNoPath:
return float("inf")
sim = Simulation(
model=TerroristEnvironment,
iterations=1,
name="TerroristNetworkModel_sim",
max_steps=150,
seed="default2",
skip_test=False,
dump=False,
)
# TODO: integrate visualization
# visualization_params:
# # Icons downloaded from https://www.iconfinder.com/
# shape_property: agent
# shapes:
# TrainingAreaModel: target
# HavenModel: home
# TerroristNetworkModel: person
# colors:
# - attr_id: civilian
# color: '#40de40'
# - attr_id: terrorist
# color: red
# - attr_id: leader
# color: '#c16a6a'
# background_image: 'map_4800x2860.jpg'
# background_opacity: '0.9'
# background_filter_color: 'blue'


@@ -1,14 +0,0 @@
---
name: torvalds_example
max_time: 10
interval: 2
agent_type: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'

examples/torvalds_sim.py Normal file

@@ -0,0 +1,25 @@
from soil import Environment, Simulation, CounterModel, report
# Get directory path for current file
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
class TorvaldsEnv(Environment):
def init(self):
self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
self.populate_network(CounterModel, skill_level='beginner')
self.agent(node_id="Torvalds").skill_level = 'God'
self.agent(node_id="balkian").skill_level = 'developer'
self.add_agent_reporter("times")
@report
def god_developers(self):
return self.count_agents(skill_level='God')
sim = Simulation(name='torvalds_example',
max_steps=10,
interval=2,
model=TorvaldsEnv)


@@ -12330,11 +12330,11 @@ Notice how node 0 is the only one with a TV.</p>
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
<span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="n">EVENT_TIME</span>
<span class="p">}}],</span>
<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span>
<span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span>
<span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span>
@@ -12468,14 +12468,14 @@ For this demo, we will use a python dictionary:</p>
<span class="p">},</span>
<span class="s1">&#39;network_agents&#39;</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span>
@@ -12483,7 +12483,7 @@ For this demo, we will use a python dictionary:</p>
<span class="p">}</span>
<span class="p">],</span>
<span class="s1">&#39;environment_agents&#39;</span><span class="p">:[</span>
<span class="p">{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="p">{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="mi">10</span>
<span class="p">}</span>

@@ -1,7 +1,13 @@
nxsim
simpy
networkx>=2.0
networkx>=2.5
numpy
matplotlib
pyyaml
pandas
pyyaml>=5.1
pandas>=1
SALib>=1.3
Jinja2
Mesa>=1.2
pydantic>=1.9
sqlalchemy>=1.4
typing-extensions>=4.4
annotated-types>=0.4
tqdm>=4.64

@@ -1,3 +1,7 @@
[metadata]
long_description = file: README.md
long_description_content_type = text/markdown
[aliases]
test=pytest
[tool:pytest]

@@ -16,6 +16,12 @@ def parse_requirements(filename):
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require={
'mesa': ['mesa>=0.8.9'],
'geo': ['scipy>=1.3'],
'web': ['tornado']
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]
setup(
@@ -38,17 +44,20 @@ setup(
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
install_requires=install_reqs,
extras_require={
'web': ['tornado']
},
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
python_requires=">=3.8",
entry_points={
'console_scripts':
['soil = soil.__init__:main',
['soil = soil.__main__:main',
'soil-web = soil.web.__init__:main']
})

@@ -1 +1 @@
0.13.0
1.0.0rc1

@@ -1,8 +1,12 @@
from __future__ import annotations
import importlib
from importlib.resources import path
import sys
import os
import pdb
import logging
import traceback
from contextlib import contextmanager
from .version import __version__
@@ -11,65 +15,273 @@ try:
except NameError:
basestring = str
logging.basicConfig()
from pathlib import Path
from .analysis import *
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment
from . import utils
from . import analysis
from .environment import Environment, EventedEnvironment
from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
from .decorators import *
def main(
cfg="simulation.yml",
exporters=None,
num_processes=1,
output="soil_output",
*,
debug=False,
pdb=False,
**kwargs,
):
sim = None
if isinstance(cfg, Simulation):
sim = cfg
def main():
import argparse
from . import simulation
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
parser.add_argument('file', type=str,
nargs="?",
default='simulation.yml',
help='python module containing the simulation configuration.')
parser.add_argument('--module', '-m', type=str,
help='file containing the code of any custom agents.')
parser.add_argument('--dry-run', '--dry', action='store_true',
help='Do not store the results of the simulation.')
parser.add_argument('--pdb', action='store_true',
help='Use a pdb console in case of exception.')
parser.add_argument('--graph', '-g', action='store_true',
help='Dump GEXF graph. Defaults to false.')
parser.add_argument('--csv', action='store_true',
help='Dump history in CSV format. Defaults to false.')
parser.add_argument('--output', '-o', type=str, default="soil_output",
help='folder to write results to. It defaults to the current directory.')
parser.add_argument('--synchronous', action='store_true',
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
logger.info("Running SOIL version: {}".format(__version__))
parser = argparse.ArgumentParser(description="Run a SOIL simulation")
parser.add_argument(
"file",
type=str,
nargs="?",
default=cfg if sim is None else "",
help="Configuration file for the simulation (e.g., YAML or JSON)",
)
parser.add_argument(
"--version", action="store_true", help="Show version info and exit"
)
parser.add_argument(
"--module",
"-m",
type=str,
help="file containing the code of any custom agents.",
)
parser.add_argument(
"--dry-run",
"--dry",
action="store_true",
help="Do not run the simulation",
)
parser.add_argument(
"--no-dump",
action="store_true",
help="Do not store the results of the simulation to disk, show in terminal instead.",
)
parser.add_argument(
"--pdb", action="store_true", help="Use a pdb console in case of exception."
)
parser.add_argument(
"--debug",
action="store_true",
help="Run a customized version of a pdb console to debug a simulation.",
)
parser.add_argument(
"--graph",
"-g",
action="store_true",
help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
)
parser.add_argument(
"--csv",
action="store_true",
help="Dump all data collected in CSV format. Defaults to false.",
)
parser.add_argument("--level", type=str, help="Logging level")
parser.add_argument(
"--output",
"-o",
type=str,
default=output or "soil_output",
help="folder to write results to. It defaults to the current directory.",
)
parser.add_argument(
"--num-processes",
default=num_processes,
help="Number of processes to use for parallel execution. Defaults to 1.",
)
parser.add_argument(
"-e",
"--exporter",
action="append",
default=[],
help="Export environment and/or simulations using this exporter",
)
parser.add_argument(
"--max_time",
default="-1",
help="Set maximum time for the simulation to run. ",
)
parser.add_argument(
"--max_steps",
default="-1",
help="Set maximum number of steps for the simulation to run.",
)
parser.add_argument(
"--iterations",
default="",
help="Set maximum number of iterations (runs) for the simulation.",
)
parser.add_argument(
"--seed",
default=None,
help="Manually set a seed for the simulation.",
)
parser.add_argument(
"--only-convert",
"--convert",
action="store_true",
help="Do not run the simulation, only convert the configuration file(s) and output them.",
)
parser.add_argument(
"--set",
metavar="KEY=VALUE",
action="append",
help="Set a number of parameters that will be passed to the simulation."
"(do not put spaces before or after the = sign). "
"If a value contains spaces, you should define "
"it with double quotes: "
'foo="this is a sentence". Note that '
"values are always treated as strings.",
)
args = parser.parse_args()
level = getattr(logging, (args.level or "INFO").upper())
logger.setLevel(level)
if args.version:
return
exporters = exporters or [
"default",
]
for exp in args.exporter:
if exp not in exporters:
exporters.append(exp)
if args.csv:
exporters.append("csv")
if args.graph:
exporters.append("gexf")
if os.getcwd() not in sys.path:
sys.path.append(os.getcwd())
if args.module:
importlib.import_module(args.module)
if output is None:
output = args.output
logging.info('Loading config file: {}'.format(args.file))
debug = debug or args.debug
if args.pdb or debug:
args.synchronous = True
os.environ["SOIL_POSTMORTEM"] = "true"
res = []
try:
dump = []
if not args.dry_run:
if args.csv:
dump.append('csv')
if args.graph:
dump.append('gexf')
simulation.run_from_config(args.file,
dry_run=args.dry_run,
dump=dump,
parallel=(not args.synchronous and not args.pdb),
results_dir=args.output)
except Exception:
exp_params = {}
opts = dict(
dry_run=args.dry_run,
dump=not args.no_dump,
debug=debug,
exporters=exporters,
num_processes=args.num_processes,
level=level,
outdir=output,
exporter_params=exp_params,
**kwargs)
if args.seed is not None:
opts["seed"] = args.seed
if args.iterations:
opts["iterations"] =int(args.iterations)
if sim:
logger.info("Loading simulation instance")
for (k, v) in opts.items():
setattr(sim, k, v)
sims = [sim]
else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
assert opts["debug"] == debug
sims = list(
simulation.iter_from_file(
args.file,
**opts,
)
)
for sim in sims:
assert sim.debug == debug
if args.set:
for s in args.set:
k, v = s.split("=", 1)[:2]
v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1]
target = sim.parameters
if head:
for part in head[0].split("."):
try:
target = getattr(target, part)
except AttributeError:
target = target[part]
try:
setattr(target, tail, v)
except AttributeError:
target[tail] = v
if args.only_convert:
print(sim.to_yaml())
continue
max_time = float(args.max_time) if args.max_time != "-1" else None
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
res.append(sim.run(max_time=max_time, max_steps=max_steps))
except Exception as ex:
if args.pdb:
pdb.post_mortem()
from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
else:
raise
if debug:
from .debugging import set_trace
os.environ["SOIL_DEBUG"] = "true"
set_trace()
return res
if __name__ == '__main__':
@contextmanager
def easy(cfg, pdb=False, debug=False, **kwargs):
try:
return main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
except Exception as e:
if os.environ.get("SOIL_POSTMORTEM"):
from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
raise
if __name__ == "__main__":
main()
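Putting the new CLI flags together, a hypothetical invocation (the configuration file name and the dotted parameter path are made up for illustration):

soil simulation.yml --csv --graph --iterations 3 --seed 42 --set "model_params.n=100"

This would run three iterations of the simulation, export the collected data as CSV and each iteration's topology as a GEXF graph, fix the random seed, and override one parameter of the loaded configuration (the code above applies eval to each --set value, so non-numeric values may need extra quoting).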

@@ -1,4 +1,9 @@
from . import main
from . import main as init_main
if __name__ == '__main__':
main()
def main():
init_main()
if __name__ == "__main__":
init_main()

@@ -1,40 +1,31 @@
import random
from . import BaseAgent
from . import FSM, state, default_state
class BassModel(BaseAgent):
class BassModel(FSM):
"""
Settings:
innovation_prob
imitation_prob
"""
def __init__(self, environment, agent_id, state):
super().__init__(environment=environment, agent_id=agent_id, state=state)
env_params = environment.environment_params
self.state['sentimentCorrelation'] = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
def behaviour(self):
# Outside effects
if random.random() < self.state_params['innovation_prob']:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
@default_state
@state
def innovation(self):
if self.prob(self.innovation_prob):
self.sentimentCorrelation = 1
return self.aware
else:
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if random.random() < (self.state_params['imitation_prob']*num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
if self.prob((self.imitation_prob * num_neighbors_aware)):
self.sentimentCorrelation = 1
return self.aware
else:
pass
@state
def aware(self):
self.die()
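The same finite-state-machine idiom generalises to any agent; a minimal hypothetical sketch with two states, where returning another state schedules the transition (decorator names taken from the diff above, import path assumed):

from soil.agents import FSM, state, default_state

class Blinker(FSM):
    on_prob = 0.5  # class attribute used as a default parameter

    @default_state
    @state
    def off(self):
        # Switch on with some probability, otherwise stay off.
        if self.prob(self.on_prob):
            return self.on

    @state
    def on(self):
        # Always switch back off on the next step.
        return self.off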

@@ -1,102 +0,0 @@
import random
from . import BaseAgent
class BigMarketModel(BaseAgent):
"""
Settings:
Names:
enterprises [Array]
tweet_probability_enterprises [Array]
Users:
tweet_probability_users
tweet_relevant_probability
tweet_probability_about [Array]
sentiment_about [Array]
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.enterprises = environment.environment_params['enterprises']
self.type = ""
self.number_of_enterprises = len(environment.environment_params['enterprises'])
if self.id < self.number_of_enterprises: # Enterprises
self.state['id'] = self.id
self.type = "Enterprise"
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
else: # normal users
self.state['id'] = self.number_of_enterprises
self.type = "User"
self.tweet_probability = environment.environment_params['tweet_probability_users']
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
self.sentiment_about = environment.environment_params['sentiment_about'] # List
def step(self):
if self.id < self.number_of_enterprises: # Enterprise
self.enterpriseBehaviour()
else: # User
self.userBehaviour()
for i in range(self.number_of_enterprises): # So that it never is set to 0 if there are not changes (logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def enterpriseBehaviour(self):
if random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
for x in aware_neighbors:
if random.uniform(0,10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
# Set limits
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id]< -1:
x.sentiment_about[self.id] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
def userBehaviour(self):
if random.random() < self.tweet_probability: # Tweets
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
# Tweet probability per enterprise
for i in range(self.number_of_enterprises):
random_num = random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
# NEGATIVE
self.userTweets("negative",i)
elif self.sentiment_about[i] == 0:
# NEUTRAL
pass
else:
# POSITIVE
self.userTweets("positive",i)
def userTweets(self,sentiment,enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] +=0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -=0.003
else:
pass
# Set limits
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]

@@ -1,32 +1,46 @@
from . import BaseAgent
from . import BaseAgent, NetworkAgent
class CounterModel(BaseAgent):
class Ticker(BaseAgent):
times = 0
def step(self):
self.times += 1
class CounterModel(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
times = 0
neighbors = 0
total = 0
def step(self):
# Outside effects
total = len(list(self.get_all_agents()))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = neighbors
self['total'] = total
total = len(list(self.model.schedule._agents))
neighbors = len(list(self.get_neighbors()))
self["times"] = self.get("times", 0) + 1
self["neighbors"] = neighbors
self["total"] = total
class AggregatedCounter(BaseAgent):
class AggregatedCounter(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
times = 0
neighbors = 0
total = 0
def step(self):
# Outside effects
total = len(list(self.get_all_agents()))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = self.get('neighbors', 0) + neighbors
self['total'] = total = self.get('total', 0) + total
self.debug('Running for step: {}. Total: {}'.format(self.now, total))
self["times"] += 1
neighbors = len(list(self.get_neighbors()))
self["neighbors"] += neighbors
total = len(list(self.model.schedule.agents))
self["total"] += total
self.debug("Running for step: {}. Total: {}".format(self.now, total))

@@ -1,18 +0,0 @@
from . import BaseAgent
import os.path
import matplotlib
import matplotlib.pyplot as plt
import networkx as nx
class DrawingAgent(BaseAgent):
"""
Agent that draws the state of the network.
"""
def step(self):
# Outside effects
f = plt.figure()
nx.draw(self.env.G, node_size=10, width=0.2, pos=nx.spring_layout(self.env.G, scale=100), ax=f.add_subplot(111))
f.savefig(os.path.join(self.env.get_path(), "graph-"+str(self.env.now)+".png"))

soil/agents/Geo.py (new file, 21 lines)

@@ -0,0 +1,21 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = self.node_id
G = self.subgraph(**kwargs)
pos = nx.get_node_attributes(G, "pos")
if not pos:
return []
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
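For intuition, the KDTree query used by geo_search behaves like this standalone sketch (node ids and coordinates are made up):

import networkx as nx
from scipy.spatial import cKDTree as KDTree

G = nx.Graph()
G.add_node("a", pos=(0.0, 0.0))
G.add_node("b", pos=(0.5, 0.0))
G.add_node("c", pos=(3.0, 4.0))

pos = nx.get_node_attributes(G, "pos")
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords)                              # build the spatial index once
indices = kdtree.query_ball_point(pos["a"], r=1.0)   # neighbours within radius 1.0
print([nodes[i] for i in indices if nodes[i] != "a"])  # ['b'] (the centre node is excluded)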

@@ -1,8 +1,7 @@
import random
from . import BaseAgent
from . import Agent, state, default_state
class IndependentCascadeModel(BaseAgent):
class IndependentCascadeModel(Agent):
"""
Settings:
innovation_prob
@@ -10,40 +9,22 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = environment.environment_params['innovation_prob']
self.imitation_prob = environment.environment_params['imitation_prob']
self.state['time_awareness'] = 0
self.state['sentimentCorrelation'] = 0
time_awareness = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
# Outside effects
@default_state
@state
def outside(self):
if self.prob(self.model.innovation_prob):
self.sentimentCorrelation = 1
self.time_awareness = self.model.now # To know when they have been infected
return self.imitate
def behaviour(self):
aware_neighbors_1_time_step = []
# Outside effects
if random.random() < self.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
self.state['time_awareness'] = self.env.now # To know when they have been infected
else:
pass
@state
def imitate(self):
aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
for x in aware_neighbors:
if x.state['time_awareness'] == (self.env.now-1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if random.random() < (self.imitation_prob*num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return
if self.prob(self.model.imitation_prob * len(aware_neighbors)):
self.sentimentCorrelation = 1
return self.outside

@@ -1,242 +0,0 @@
import random
import numpy as np
from . import BaseAgent
class SpreadModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
environment.environment_params['standard_variance'])
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
environment.environment_params['standard_variance'])
def step(self):
if self.state['id'] == 0: # Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: # Infected
self.infected_behaviour()
elif self.state['id'] == 2: # Cured
self.cured_behaviour()
elif self.state['id'] == 3: # Vaccinated
self.vaccinated_behaviour()
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
class ControlModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
environment.environment_params['standard_variance'])
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
environment.environment_params['standard_variance'])
def step(self):
if self.state['id'] == 0: # Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: # Infected
self.infected_behaviour()
elif self.state['id'] == 2: # Cured
self.cured_behaviour()
elif self.state['id'] == 3: # Vaccinated
self.vaccinated_behaviour()
elif self.state['id'] == 4: # Beacon-off
self.beacon_off_behaviour()
elif self.state['id'] == 5: # Beacon-on
self.beacon_on_behaviour()
def neutral_behaviour(self):
self.state['visible'] = False
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
self.state['visible'] = False
def cured_behaviour(self):
self.state['visible'] = True
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
self.state['visible'] = True
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
def beacon_off_behaviour(self):
self.state['visible'] = False
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
self.state['id'] == 5 # Beacon on
def beacon_on_behaviour(self):
self.state['visible'] = False
# Cure (M2 feature added)
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated

Some files were not shown because too many files have changed in this diff.