mirror of https://github.com/gsi-upm/soil synced 2025-10-18 17:28:30 +00:00

Compare commits


29 Commits

Author SHA1 Message Date
J. Fernando Sánchez
eca4cae298 Update tutorial and fix plotting bug 2023-04-24 19:23:04 +02:00
J. Fernando Sánchez
47a67f6665 Add jupyter to test-requirements 2023-04-24 19:04:53 +02:00
J. Fernando Sánchez
c13550cf83 Tweak ipynb testing 2023-04-24 18:57:07 +02:00
J. Fernando Sánchez
55bbc76b2a Improve tutorial
* Improve text
* Move to docs
* Autogenerate with sphinx
* Fix naming issue `environment.run` (double name)
* Add to tests
2023-04-24 18:47:52 +02:00
J. Fernando Sánchez
d13e4eb4b9 Plot env without agent reporters 2023-04-24 18:06:37 +02:00
J. Fernando Sánchez
93d23e4cab Add tutorial test to CI/CD 2023-04-24 18:05:07 +02:00
J. Fernando Sánchez
3802578ad5 Use context manager to add source files 2023-04-24 17:40:00 +02:00
J. Fernando Sánchez
4e296e0cf1 Merge branch 'mesa' 2023-04-21 15:19:21 +02:00
J. Fernando Sánchez
302075a65d Fix bug Py3.11 2023-04-21 14:27:41 +02:00
J. Fernando Sánchez
fba379c97c Update readme 2023-04-21 13:31:00 +02:00
J. Fernando Sánchez
50bca88362 Fix pre-release version of v1.0.0rc1 2023-04-20 18:07:42 +02:00
J. Fernando Sánchez
cc238d84ec Pre-release version of v1.0 2023-04-20 17:57:18 +02:00
J. Fernando Sánchez
be65592055 Default parameters terroristnetwork 2023-04-14 20:25:16 +02:00
J. Fernando Sánchez
1d882dcff6 Update easy function 2023-04-14 20:21:34 +02:00
J. Fernando Sánchez
b3e77cbff5 Update python version in gitlab-ci 2023-04-14 20:07:16 +02:00
J. Fernando Sánchez
05748a3250 Update python version requirement 2023-04-14 20:03:47 +02:00
J. Fernando Sánchez
a3fc6a5efa Update README 2023-04-14 19:56:44 +02:00
J. Fernando Sánchez
4e95709188 Update README 2023-04-14 19:53:31 +02:00
J. Fernando Sánchez
feab0ba79e Large set of changes for v0.30
The examples weren't being properly tested in the last commit. When we fixed
that, a lot of bugs in the new implementation of environment and agent were
found, which accounts for most of these changes.

The main difference is the mechanism to load simulations from a configuration
file. For that to work, we had to rework our module loading code in
`serialization` and add a `source_file` attribute to configurations (and
simulations, for that matter).
2023-04-14 19:41:24 +02:00
J. Fernando Sánchez
73282530fd Big refactor v0.30
All tests pass, except for the TestConfig suite, which is not too critical as the
plan from this version onwards is to avoid configuration as much as possible.
2023-04-09 04:19:24 +02:00
J. Fernando Sánchez
bf481f0f88 v0.20.8 fix bugs 2023-03-23 14:49:09 +01:00
J. Fernando Sánchez
2869b1e1e6 Clean-up
* Removed old/unnecessary models
* Added a `simulation.{iter_}from_py` method to load simulations from python
files
* Changed tests of examples to run programmatic simulations
* Fixed programmatic examples
2022-11-13 20:31:05 +01:00
J. Fernando Sánchez
d3cee18635 Add seed to cars example 2022-10-20 14:47:28 +02:00
J. Fernando Sánchez
9a7b62e88e Release 0.30.0rc3 2022-10-20 14:12:34 +02:00
J. Fernando Sánchez
c09e480d37 black formatting 2022-10-20 14:12:10 +02:00
J. Fernando Sánchez
b2d48cb4df Add test cases for 'ASK' 2022-10-20 14:10:34 +02:00
J. Fernando Sánchez
a1262edd2a Refactored time
Treating time and conditions as the same entity was getting confusing, and it
added a lot of unnecessary abstraction in a critical part (the scheduler).

The scheduling queue now has the time as a floating number (faster), the agent
id (for ties) and the condition, as well as the agent. The first three
elements (time, id, condition) can be considered as the "key" for the event.

To allow for agent execution to be "randomized" within every step, a new
parameter has been added to the scheduler, which makes it add a random number to
the key in order to change the ordering.

`EventedAgent.received` now checks the messages before returning control to the
user by default.
2022-10-20 12:15:25 +02:00
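The queue layout described in this commit message can be illustrated with a short, hypothetical sketch (plain `heapq`, not the actual `soil.time` implementation):

```python
import heapq
import itertools
import random

class ToyScheduler:
    """Toy sketch of the ordering described above: events are keyed by time
    and agent id, and an optional random component shuffles the order of
    agents scheduled for the same time."""

    def __init__(self, shuffle_within_step=False):
        self._queue = []
        self._counter = itertools.count()  # last-resort tie-breaker
        self.shuffle_within_step = shuffle_within_step

    def schedule(self, agent, when, condition=None):
        tie = random.random() if self.shuffle_within_step else 0
        key = (when, tie, agent.unique_id)
        heapq.heappush(self._queue, (key, next(self._counter), condition, agent))

    def pop_next(self):
        (when, _, _), _, condition, agent = heapq.heappop(self._queue)
        return when, condition, agent
```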
J. Fernando Sánchez
cbbaf73538 Fix bug EventedEnvironment 2022-10-20 12:07:56 +02:00
J. Fernando Sánchez
2f5e5d0a74 Black formatting 2022-10-18 17:03:40 +02:00
129 changed files with 4883 additions and 92397 deletions

View File

@@ -1,5 +1,7 @@
**/soil_output
.*
**/.*
**/__pycache__
__pycache__
*.pyc
**/backup

3
.gitignore vendored
View File

@@ -8,4 +8,5 @@ soil_output
docs/_build*
build/*
dist/*
prof
prof
backup

View File

@@ -20,7 +20,7 @@ docker:
test:
tags:
- docker
image: python:3.7
image: python:3.8
stage: test
script:
- pip install -r requirements.txt -r test-requirements.txt
@@ -31,7 +31,7 @@ push_pypi:
- tags
tags:
- docker
image: python:3.7
image: python:3.8
stage: publish
script:
- echo $CI_COMMIT_TAG > soil/VERSION
@@ -44,7 +44,7 @@ check_pypi:
- tags
tags:
- docker
image: python:3.7
image: python:3.8
stage: check_published
script:
- pip install soil==$CI_COMMIT_TAG

View File

@@ -3,21 +3,37 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.30 UNRELEASED]
## [1.0 UNRELEASED]
Version 1.0 introduced multiple changes, especially on the `Simulation` class and anything related to how configuration is handled.
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
### Added
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similar to normal states, with one caveat. Only `time` values can be yielded, not a state. This is because the state will not change, it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
* Environments now have a class method (`run`) to make them easier to use without a `Simulation`. Notice that this is different from `run_model`, which is an instance method.
* Ability to run simulations using mesa models
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* Agents can now have generators as a step function or a state. They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or None) can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states (see the sketch after this list).
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a cartesian product (i.e., all possible combinations) of each parameter.
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
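A minimal, hypothetical sketch of a generator-based state (the agent class and its values are made up; the decorators and `Delta` come from the examples elsewhere in this changeset):

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta

class Ping(FSM):
    pings = 0

    @default_state
    @state
    def pinging(self):
        while self.pings < 3:
            self.pings += 1
            yield Delta(2)    # only a time value (or None) may be yielded
        return self.done      # changing state happens through the return value

    @state
    def done(self):
        self.die()
```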
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Ability
* Configuration schema (`Simulation`) is very simplified. All simulations should be checked
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (specific parameters for this environment). We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa.
* `num_trials` renamed to `iterations`
* General renaming of `trial` to `iteration`, to work better with `mesa`
* `model_parameters` renamed to `parameters` in simulation
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
## [0.20.8]
### Changed
* Tsih bumped to version 0.1.8
### Fixed
* Mentions of `id` in docs. It should be `state_id` now.
* Fixed bug: environment agents were not being added to the simulation
## [0.20.7]
### Changed

View File

@@ -1,12 +1,52 @@
# [SOIL](https://github.com/gsi-upm/soil)
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
Follow our [tutorial](docs/tutorial/soil_tutorial.ipynb) to develop your own agent models.
> **Warning**
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst)
## Features
* Integration with (social) networks (through `networkx`)
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
* Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* Based on the topology of the network
* **Several types of abstractions for agents**:
* Finite state machine, where methods can be turned into a state
* Network agents, which have convenience methods to access the model's topology
* Generator-based agents, whose state is paused through a `yield` and resumed on the next step
* **Reporting and data collection**:
* Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
* All data collected are exported by default to a SQLite database and a description file
* Options to export to other formats, such as CSV, or defining your own exporters
* A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**
* Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
* Time intervals between each step are flexible.
* There are primitives to specify when (or under which conditions) the next execution of an agent should happen
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
* Run models in parallel
* Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to quickly run simulations with different parameters
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
## Mesa compatibility
SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
For instance, it should be possible to run `mesa.Model` models using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler on a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
For instance, you may add any `soil.agent` agent to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that may not work.
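As a rough sketch of what that combination looks like (the model name is hypothetical; the `Simulation` arguments mirror `examples/mesa/mesa_sim.py` in this changeset):

```python
from soil import Simulation
from mymodule import MyMesaModel  # hypothetical: any regular mesa.Model subclass

sim = Simulation(
    name="mesa_compat_demo",
    model=MyMesaModel,             # the mesa model to instantiate for each run
    parameters=dict(n_agents=10),  # forwarded to the model constructor
    max_steps=100,
    iterations=3,                  # "trials" are called iterations in 1.0
    dump=False,                    # keep results in memory instead of dumping
)

if __name__ == "__main__":
    sim.run()
```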
# Changes in version 0.3
## Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
@@ -18,27 +58,6 @@ If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation

View File

@@ -31,7 +31,10 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['IPython.sphinxext.ipython_console_highlighting']
extensions = [
"IPython.sphinxext.ipython_console_highlighting",
"nbsphinx",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -64,7 +67,7 @@ release = '0.1'
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
language = "en"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
@@ -152,6 +155,3 @@ texinfo_documents = [
author, 'Soil', 'One line description of project.',
'Miscellaneous'),
]

View File

@@ -1,262 +0,0 @@
Configuring a simulation
------------------------
There are two ways to configure a simulation: programmatically and with a configuration file.
In both cases, the parameters used are the same.
The advantage of a configuration file is that it is a clean declarative description, and it makes it easier to reproduce.
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. literalinclude:: example.yml
:language: yaml
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``.
The state of the agents will be updated every 2 seconds (``interval``).
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
.. code::
soil_output
└── MyExampleSimulation
├── MyExampleSimulation.dumped.yml
├── MyExampleSimulation.simulation.pickle
├── MyExampleSimulation_trial_0.db.sqlite
├── MyExampleSimulation_trial_1.db.sqlite
└── MyExampleSimulation_trial_2.db.sqlite
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
.. code:: yaml
---
topology:
nodes:
- id: First
- id: Second
links:
- source: First
target: Second
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
.. code:: yaml
network_params:
generator: complete_graph
n: 100
Environment
============
The environment is the place where the shared state of the simulation is stored.
That includes global parameters, such as the probability of a disease outbreak, as well as other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this on the following section).
Soil environments are very similar, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
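For instance, a minimal sketch (hypothetical class and parameter names, assuming the ``soil.Environment`` base class):

.. code:: python

   from soil import Environment

   class LotteryEnv(Environment):
       """Hypothetical environment exposing a helper that agents can call."""

       prob_win = 0.001  # shared parameter, visible to every agent

       def played_and_won(self, agent):
           # The environment, not the agent, decides the outcome, so the
           # rule can be tweaked in a single place.
           return self.random.random() < self.prob_win

An agent could then call ``self.model.played_and_won(self)`` from its ``step`` method.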
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_class``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a delay of one ``interval``.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
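As an illustration, a bare-bones agent with an explicit ``step`` method might look like this (hypothetical class, reusing the earthquake parameters from the environment example above):

.. code:: python

   from soil.agents import BaseAgent

   class EarthquakeMonitor(BaseAgent):
       """Hypothetical agent: step() is what the scheduler calls."""

       def step(self):
           # Through self.model the agent can read and update the shared state.
           if self.model.random.random() < self.model.daily_probability_of_earthquake:
               self.model.number_of_earthquakes += 1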
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated with an agent of that type.
.. code:: yaml
agent_class: SISaModel
It is also possible to add more than one type of agent to the simulation.
To control the ratio of each type, use the ``weight`` property.
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 1
- agent_class: CounterModel
weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
.. code:: yaml
topology:
nodes:
- id: first
agent_class: BaseAgent
states:
first:
agent_class: SISaModel
This would also work with a randomly generated network:
.. code:: yaml
network:
generator: complete
n: 5
agent_class: BaseAgent
states:
- agent_class: SISaModel
In addition to agent type, you may add a custom initial state to the distribution.
This is very useful to add the same agent type with different states.
e.g., to populate the network with SISaModel agents, roughly 10% of them in a discontent state:
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 9
state:
id: neutral
- agent_class: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_class: SISaModel
network:
generator: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_class: MyAgent
state:
mood: happy
- agent_class: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.
Templating
==========
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters in the simulation.
For instance, you may want to run a simulation with different agent distributions.
This can be done in Soil using **templates**.
A template is a configuration where some of the values are specified with a variable.
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
There are two types of variables, depending on how their values are decided:
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
Here is an example with a single fixed variable and two bounded variables:
.. literalinclude:: ../examples/template.yml
:language: yaml

View File

@@ -3,33 +3,38 @@ name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
model_params:
topology:
params:
generator: barabasi_albert_graph
n: 100
m: 2
agents:
distribution:
- agent_class: SISaModel
weight: 1
topology: True
ratio: 0.1
state:
id: content
state_id: content
- agent_class: SISaModel
weight: 1
topology: True
ratio: .1
state:
id: discontent
state_id: discontent
- agent_class: SISaModel
weight: 8
topology: True
ratio: 0.8
state:
id: neutral
environment_params:
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1
state_id: neutral
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1

View File

@@ -1,12 +1,21 @@
.. Soil documentation master file, created by
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Soil's documentation!
================================
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`.
Soil can be installed through pip (see more details in the :doc:`installation` page):
.. image:: soil.png
:width: 80%
:align: center
.. code:: bash
pip install soil
If you use Soil in your research, do not forget to cite this paper:
@@ -38,9 +47,8 @@ If you use Soil in your research, do not forget to cite this paper:
:caption: Learn more about soil:
installation
quickstart
configuration
Tutorial <soil_tutorial>
Tutorial <tutorial/soil_tutorial>
notes_v1.0
..

View File

@@ -1,7 +1,10 @@
Installation
------------
The easiest way to install Soil is through pip, with Python >= 3.4:
Through pip
===========
The easiest way to install Soil is through pip, with Python >= 3.8:
.. code:: bash
@@ -14,6 +17,10 @@ Now test that it worked by running the command line tool
soil --help
#or
python -m soil --help
Or, if you're using soil programmatically:
.. code:: python
@@ -21,4 +28,38 @@ Or, if you're using using soil programmatically:
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.
Web UI
======
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web
Development
===========
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
.. code:: bash
git clone https://github.com/gsi-upm/soil
cd soil
python -m venv .venv
source .venv/bin/activate
pip install --editable .

View File

@@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.

22
docs/mesa.rst Normal file
View File

@@ -0,0 +1,22 @@
Mesa compatibility
------------------
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage

38
docs/notes_v1.0.rst Normal file
View File

@@ -0,0 +1,38 @@
Upgrading to Soil 1.0
---------------------
What are the main changes in version 1.0?
#########################################
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.
We drew several lessons from the previous version of Soil, and tried to address them in this version.
Mainly:
- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases. For most use cases, only a handful of variables need to be stored. This fits nicely with Mesa's data collection system.
- The API was too complex, and it was difficult to understand how to use it.
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.
The 0.30 rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
The new Soil is less configuration-centric.
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.
How are agents assigned to nodes in the network
###############################################
The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
If `topology` is not provided, it will default to `self.model.topology`.
This assignment might err if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
If `node_id` is not provided, a random node will be selected from the topology, until a node with no agent is found.
Then, the `node_id` of that node is assigned to the agent.
If no node with no agent is found, a new node is automatically added to the topology.
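Restated as a toy snippet (plain ``networkx``, not Soil's actual implementation):

.. code:: python

   import random
   import networkx as nx

   def assign_node(G: nx.Graph, agent_id, node_id=None):
       """Toy restatement of the rule above: use the given node, else pick a
       random node with no agent, else add a brand-new node."""
       if node_id is None:
           free = [n for n, data in G.nodes(data=True) if data.get("agent") is None]
           node_id = random.choice(free) if free else max(G.nodes, default=-1) + 1
       G.add_node(node_id, agent=agent_id)
       return node_id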
Can Soil environments include more than one network / topology?
###############################################################
Yes, but each network has to be included manually.
Somewhere between 0.20 and 0.30 we added the ability to include multiple networks, but it was deemed too complex and was removed.

(32 binary image files deleted; binary files not shown)

View File

@@ -1,93 +0,0 @@
Quickstart
----------
This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.
There are two main parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as the number of agents to use and their type, the network topology to use, etc.
.. image:: soil.png
:width: 80%
:align: center
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.
Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
.. literalinclude:: quickstart.yml
:language: yaml
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities of changing from one state to another, either spontaneously or because of contagion from neighboring agents.
Running the simulation
======================
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
.. code::
soil --graph --csv quickstart.yml [13:35:29]
INFO:soil:Using config(s): quickstart
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
INFO:soil:Starting simulation quickstart at 13:35:30.
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
INFO:soil:Dumping results to soil_output/quickstart
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
The ``CSV`` file should look like this:
.. code::
agent_id,t_step,key,value
env,0,neutral_discontent_spon_prob,0.05
env,0,neutral_discontent_infected_prob,0.1
env,0,neutral_content_spon_prob,0.2
env,0,neutral_content_infected_prob,0.4
env,0,discontent_neutral,0.2
env,0,discontent_content,0.05
env,0,content_discontent,0.05
env,0,variance_d_c,0.05
env,0,variance_c_d,0.1
Results and visualization
=========================
The environment variables are the rows with ``env`` as their ``agent_id``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
The dynamic graph is exported as a .gexf file, which can be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web

View File

@@ -1,30 +0,0 @@
---
name: quickstart
num_trials: 1
max_time: 1000
network_agents:
- agent_class: SISaModel
state:
id: neutral
weight: 1
- agent_class: SISaModel
state:
id: content
weight: 2
network_params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
environment_params:
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

View File

@@ -1 +1,2 @@
ipython>=7.31.1
nbsphinx==0.9.1

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -1,54 +0,0 @@
---
version: '2'
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: Environment
model_params:
am_i_complete: true
topology:
params:
generator: complete_graph
n: 12
environment:
agents:
agent_class: CounterModel
topology: true
state:
times: 1
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: false
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0
group: fixed_counters
state:
times: 1
total: 0
- agent_class: CounterModel
group: fixed_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: distro_counters
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5

View File

@@ -1,16 +0,0 @@
---
name: custom-generator
description: Using a custom generator for the network
num_trials: 3
max_steps: 100
interval: 1
network_params:
generator: mymodule.mygenerator
# These are custom parameters
n: 10
n_edges: 5
network_agents:
- agent_class: CounterModel
weight: 1
state:
state_id: 0

View File

@@ -1,6 +1,7 @@
from networkx import Graph
import random
import networkx as nx
from soil import Simulation, Environment, CounterModel, parameters
def mygenerator(n=5, n_edges=5):
@@ -20,3 +21,19 @@ def mygenerator(n=5, n_edges=5):
n_out = random.choice(nodes)
G.add_edge(n_in, n_out)
return G
class GeneratorEnv(Environment):
"""Using a custom generator for the network"""
generator: parameters.function = staticmethod(mygenerator)
def init(self):
self.create_network(generator=self.generator, n=10, n_edges=5)
self.add_agents(CounterModel)
sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)
if __name__ == '__main__':
sim.run(dump=False)

View File

@@ -1,17 +1,17 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta
class Fibonacci(FSM):
"""Agent that only executes in t_steps that are Fibonacci numbers"""
defaults = {"prev": 1}
prev = 1
@default_state
@state
def counting(self):
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev)
return None, Delta(prev)
class Odds(FSM):
@@ -21,18 +21,21 @@ class Odds(FSM):
@state
def odds(self):
self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1 + self.now % 2)
return None, Delta(1 + self.now % 2)
from soil import Environment, Simulation
from networkx import complete_graph
class TimeoutsEnv(Environment):
def init(self):
self.create_network(generator=complete_graph, n=2)
self.add_agent(agent_class=Fibonacci, node_id=0)
self.add_agent(agent_class=Odds, node_id=1)
sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)
if __name__ == "__main__":
from soil import Simulation
s = Simulation(
network_agents=[
{"ids": [0], "agent_class": Fibonacci},
{"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)
sim.run(dump=False)

View File

@@ -2,6 +2,8 @@ This example can be run like with command-line options, like this:
```bash
python cars.py --level DEBUG -e summary --csv
#or
soil cars.py -e summary
```
This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.

View File

@@ -18,6 +18,7 @@ An example scenario could play like the following:
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid
@@ -28,17 +29,18 @@ from mesa.space import MultiGrid
@dataclass
class Journey:
"""
This represents a request for a journey. Passengers and drivers exchange this object.
This represents a request for a journey. Passengers and drivers exchange this object.
A journey may have a driver assigned or not. If the driver has not been assigned, this
object is considered a "request for a journey".
"""
origin: (int, int)
destination: (int, int)
tip: float
passenger: Passenger
driver: Driver = None
driver: Optional[Driver] = None
class City(EventedEnvironment):
@@ -54,28 +56,25 @@ class City(EventedEnvironment):
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
def __init__(self, *args, n_cars=1, n_passengers=10,
height=100, width=100, agents=None,
model_reporters=None,
**kwargs):
self.grid = MultiGrid(width=width, height=height, torus=False)
if agents is None:
agents = []
for i in range(n_cars):
agents.append({'agent_class': Driver})
for i in range(n_passengers):
agents.append({'agent_class': Passenger})
model_reporters = model_reporters or {'earnings': 'total_earnings', 'n_passengers': 'number_passengers'}
print('REPORTERS', model_reporters)
super().__init__(*args, agents=agents, model_reporters=model_reporters, **kwargs)
n_cars = 1
n_passengers = 10
height = 100
width = 100
def init(self):
self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
if not self.agents:
self.add_agents(Driver, k=self.n_cars)
self.add_agents(Passenger, k=self.n_passengers)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
self.total_earnings = 0
self.add_model_reporter("total_earnings")
@property
def total_earnings(self):
return sum(d.earnings for d in self.agents(agent_class=Driver))
@report
@property
def number_passengers(self):
return self.count_agents(agent_class=Passenger)
@@ -87,32 +86,35 @@ class Driver(Evented, FSM):
earnings = 0
def on_receive(self, msg, sender):
'''This is not a state. It will run (and block) every time check_messages is invoked'''
"""This is not a state. It will run (and block) every time check_messages is invoked"""
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
msg.driver = self
self.journey = msg
def check_passengers(self):
'''If there are no more passengers, stop forever'''
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.info(f"Passengers left {c}")
self.debug(f"Passengers left {c}")
if not c:
self.die()
self.die("No more passengers")
@default_state
@state
def wandering(self):
'''Move around the city until a journey is accepted'''
"""Move around the city until a journey is accepted"""
target = None
self.check_passengers()
self.journey = None
while self.journey is None: # No potential journeys detected (see on_receive)
if target is None or not self.move_towards(target):
target = self.random.choice(self.model.grid.get_neighborhood(self.pos, moore=False))
target = self.random.choice(
self.model.grid.get_neighborhood(self.pos, moore=False)
)
self.check_passengers()
self.check_messages() # This will call on_receive behind the scenes, and the agent's status will be updated
yield Delta(30) # Wait at least 30 seconds before checking again
# This will call on_receive behind the scenes, and the agent's status will be updated
self.check_messages()
yield Delta(30) # Wait at least 30 seconds before checking again
try:
# Re-send the journey to the passenger, to confirm that we have been selected
@@ -126,18 +128,22 @@ class Driver(Evented, FSM):
@state
def driving(self):
'''The journey has been accepted. Pick them up and take them to their destination'''
"""The journey has been accepted. Pick them up and take them to their destination"""
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
while self.move_towards(self.journey.origin):
yield
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
while self.move_towards(self.journey.destination, with_passenger=True):
yield
self.info("Arrived at destination")
self.earnings += self.journey.tip
self.model.total_earnings += self.journey.tip
self.check_passengers()
return self.wandering
def move_towards(self, target, with_passenger=False):
'''Move one cell at a time towards a target'''
self.info(f"Moving { self.pos } -> { target }")
"""Move one cell at a time towards a target"""
self.debug(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
@@ -151,55 +157,75 @@ class Driver(Evented, FSM):
break
self.model.grid.move_agent(self, tuple(next_pos))
if with_passenger:
self.journey.passenger.pos = self.pos # This could be communicated through messages
self.journey.passenger.pos = (
self.pos
) # This could be communicated through messages
return True
class Passenger(Evented, FSM):
pos = None
def on_receive(self, msg, sender):
'''This is not a state. It will be run synchronously every time `check_messages` is run'''
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
if isinstance(msg, Journey):
self.journey = msg
return msg
@default_state
@state
def asking(self):
destination = (self.random.randint(0, self.model.grid.height), self.random.randint(0, self.model.grid.width))
destination = (
self.random.randint(0, self.model.grid.height-1),
self.random.randint(0, self.model.grid.width-1),
)
self.journey = None
journey = Journey(origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self)
journey = Journey(
origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self,
)
timeout = 60
expiration = self.now + timeout
self.info(f"Asking for journey at: { self.pos }")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey:
self.info(f"Passenger at: { self.pos }. Checking for responses.")
self.debug(f"Waiting for responses at: { self.pos }")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Passenger at: { self.pos }. Asking for journey.")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
self.info(f"Still no response. Waiting at: { self.pos }")
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
self.check_messages()
self.info(f"Got a response! Waiting for driver")
return self.driving_home
@state
def driving_home(self):
while self.pos[0] != self.journey.destination[0] or self.pos[1] != self.journey.destination[1]:
yield self.received(timeout=60)
self.info("Got home safe!")
self.die()
while (
self.pos[0] != self.journey.destination[0]
or self.pos[1] != self.journey.destination[1]
):
try:
yield self.received(timeout=60)
except events.TimedOut:
pass
self.die("Got home safe!")
simulation = Simulation(name='RideHailing', model_class=City, model_params={'n_passengers': 2})
simulation = Simulation(name="RideHailing",
model=City,
seed="carsSeed",
max_time=1000,
parameters=dict(n_passengers=2))
if __name__ == "__main__":
with easy(simulation) as s:
s.run()
easy(simulation)

View File

@@ -1,19 +0,0 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_steps: 100
interval: 1
seed: '1'
model_class: social_wealth.MoneyEnv
model_params:
generator: social_wealth.graph_generator
agents:
topology: true
distribution:
- agent_class: social_wealth.SocialMoneyAgent
weight: 1
N: 10
width: 50
height: 50

View File

@@ -0,0 +1,7 @@
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
if __name__ == "__main__":
sim.run()

View File

@@ -1,5 +1,5 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.UserParam import Slider, Choice
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
@@ -63,9 +63,8 @@ chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
parameters = {
"N": Slider(
"N",
5,
1,
@@ -73,8 +72,7 @@ model_params = {
1,
description="Choose how many agents to include in the model",
),
"height": UserSettableParameter(
"slider",
"height": Slider(
"height",
5,
5,
@@ -82,8 +80,7 @@ model_params = {
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width": Slider(
"width",
5,
5,
@@ -91,8 +88,7 @@ model_params = {
1,
description="Grid width",
),
"agent_class": UserSettableParameter(
"choice",
"agent_class": Choice(
"Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
@@ -102,13 +98,14 @@ model_params = {
canvas_element = CanvasGrid(
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
)
server.port = 8521
server.launch(open_browser=False)
if __name__ == '__main__':
server.launch(open_browser=False)

View File

@@ -28,7 +28,7 @@ class MoneyAgent(MesaAgent):
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1):
def __init__(self, unique_id, model, wealth=1, **kwargs):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth
@@ -53,7 +53,7 @@ class MoneyAgent(MesaAgent):
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
class SocialMoneyAgent(MoneyAgent, NetworkAgent):
wealth = 1
def give_money(self):

View File

@@ -2,13 +2,12 @@
"cells": [
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2017-11-08T16:22:30.732107Z",
"start_time": "2017-11-08T17:22:30.059855+01:00"
},
"collapsed": true
}
},
"outputs": [],
"source": [
@@ -28,24 +27,16 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2017-11-08T16:22:35.580593Z",
"start_time": "2017-11-08T17:22:35.542745+01:00"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Populating the interactive namespace from numpy and matplotlib\n"
]
}
],
"outputs": [],
"source": [
"%pylab inline\n",
"%matplotlib inline\n",
"\n",
"from soil import *"
]
@@ -66,7 +57,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2017-11-08T16:22:37.242327Z",
@@ -86,7 +77,7 @@
" prob_neighbor_spread: 0.0\r\n",
" prob_tv_spread: 0.01\r\n",
"interval: 1\r\n",
"max_time: 30\r\n",
"max_time: 300\r\n",
"name: Sim_all_dumb\r\n",
"network_agents:\r\n",
"- agent_class: DumbViewer\r\n",
@@ -110,7 +101,7 @@
" prob_neighbor_spread: 0.0\r\n",
" prob_tv_spread: 0.01\r\n",
"interval: 1\r\n",
"max_time: 30\r\n",
"max_time: 300\r\n",
"name: Sim_half_herd\r\n",
"network_agents:\r\n",
"- agent_class: DumbViewer\r\n",
@@ -142,18 +133,18 @@
" prob_neighbor_spread: 0.0\r\n",
" prob_tv_spread: 0.01\r\n",
"interval: 1\r\n",
"max_time: 30\r\n",
"max_time: 300\r\n",
"name: Sim_all_herd\r\n",
"network_agents:\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" state_id: neutral\r\n",
" weight: 1\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" state_id: neutral\r\n",
" weight: 1\r\n",
"network_params:\r\n",
" generator: barabasi_albert_graph\r\n",
@@ -169,13 +160,13 @@
" prob_tv_spread: 0.01\r\n",
" prob_neighbor_cure: 0.1\r\n",
"interval: 1\r\n",
"max_time: 30\r\n",
"max_time: 300\r\n",
"name: Sim_wise_herd\r\n",
"network_agents:\r\n",
"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" state_id: neutral\r\n",
" weight: 1\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
@@ -195,13 +186,13 @@
" prob_tv_spread: 0.01\r\n",
" prob_neighbor_cure: 0.1\r\n",
"interval: 1\r\n",
"max_time: 30\r\n",
"max_time: 300\r\n",
"name: Sim_all_wise\r\n",
"network_agents:\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" state_id: neutral\r\n",
" weight: 1\r\n",
"- agent_class: WiseViewer\r\n",
" state:\r\n",
@@ -225,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2017-11-08T18:07:46.781745Z",
@@ -233,7 +224,24 @@
},
"scrolled": true
},
"outputs": [],
"outputs": [
{
"ename": "ValueError",
"evalue": "No objects to concatenate",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[4], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m evodumb \u001b[38;5;241m=\u001b[39m \u001b[43manalysis\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread_data\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43msoil_output/Sim_all_dumb/\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprocess\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43manalysis\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_count\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgroup\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mkeys\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m[\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mid\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m;\n",
"File \u001b[0;32m/mnt/data/home/j/git/lab.gsi/soil/soil/soil/analysis.py:14\u001b[0m, in \u001b[0;36mread_data\u001b[0;34m(group, *args, **kwargs)\u001b[0m\n\u001b[1;32m 12\u001b[0m iterable \u001b[38;5;241m=\u001b[39m _read_data(\u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[1;32m 13\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m group:\n\u001b[0;32m---> 14\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mgroup_trials\u001b[49m\u001b[43m(\u001b[49m\u001b[43miterable\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 15\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 16\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mlist\u001b[39m(iterable)\n",
"File \u001b[0;32m/mnt/data/home/j/git/lab.gsi/soil/soil/soil/analysis.py:201\u001b[0m, in \u001b[0;36mgroup_trials\u001b[0;34m(trials, aggfunc)\u001b[0m\n\u001b[1;32m 199\u001b[0m trials \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mlist\u001b[39m(trials)\n\u001b[1;32m 200\u001b[0m trials \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mlist\u001b[39m(\u001b[38;5;28mmap\u001b[39m(\u001b[38;5;28;01mlambda\u001b[39;00m x: x[\u001b[38;5;241m1\u001b[39m] \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(x, \u001b[38;5;28mtuple\u001b[39m) \u001b[38;5;28;01melse\u001b[39;00m x, trials))\n\u001b[0;32m--> 201\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mpd\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mconcat\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtrials\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39mgroupby(level\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m0\u001b[39m)\u001b[38;5;241m.\u001b[39magg(aggfunc)\u001b[38;5;241m.\u001b[39mreorder_levels([\u001b[38;5;241m2\u001b[39m, \u001b[38;5;241m0\u001b[39m,\u001b[38;5;241m1\u001b[39m] ,axis\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m1\u001b[39m)\n",
"File \u001b[0;32m/mnt/data/home/j/git/lab.gsi/soil/soil/.env-v0.20/lib/python3.8/site-packages/pandas/util/_decorators.py:331\u001b[0m, in \u001b[0;36mdeprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 325\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m>\u001b[39m num_allow_args:\n\u001b[1;32m 326\u001b[0m warnings\u001b[38;5;241m.\u001b[39mwarn(\n\u001b[1;32m 327\u001b[0m msg\u001b[38;5;241m.\u001b[39mformat(arguments\u001b[38;5;241m=\u001b[39m_format_argument_list(allow_args)),\n\u001b[1;32m 328\u001b[0m \u001b[38;5;167;01mFutureWarning\u001b[39;00m,\n\u001b[1;32m 329\u001b[0m stacklevel\u001b[38;5;241m=\u001b[39mfind_stack_level(),\n\u001b[1;32m 330\u001b[0m )\n\u001b[0;32m--> 331\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m/mnt/data/home/j/git/lab.gsi/soil/soil/.env-v0.20/lib/python3.8/site-packages/pandas/core/reshape/concat.py:368\u001b[0m, in \u001b[0;36mconcat\u001b[0;34m(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)\u001b[0m\n\u001b[1;32m 146\u001b[0m \u001b[38;5;129m@deprecate_nonkeyword_arguments\u001b[39m(version\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, allowed_args\u001b[38;5;241m=\u001b[39m[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mobjs\u001b[39m\u001b[38;5;124m\"\u001b[39m])\n\u001b[1;32m 147\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mconcat\u001b[39m(\n\u001b[1;32m 148\u001b[0m objs: Iterable[NDFrame] \u001b[38;5;241m|\u001b[39m Mapping[HashableT, NDFrame],\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 157\u001b[0m copy: \u001b[38;5;28mbool\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m,\n\u001b[1;32m 158\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m DataFrame \u001b[38;5;241m|\u001b[39m Series:\n\u001b[1;32m 159\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 160\u001b[0m \u001b[38;5;124;03m Concatenate pandas objects along a particular axis.\u001b[39;00m\n\u001b[1;32m 161\u001b[0m \n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 366\u001b[0m \u001b[38;5;124;03m 1 3 4\u001b[39;00m\n\u001b[1;32m 367\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 368\u001b[0m op \u001b[38;5;241m=\u001b[39m \u001b[43m_Concatenator\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 369\u001b[0m \u001b[43m \u001b[49m\u001b[43mobjs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 370\u001b[0m \u001b[43m \u001b[49m\u001b[43maxis\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43maxis\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 371\u001b[0m \u001b[43m \u001b[49m\u001b[43mignore_index\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mignore_index\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 372\u001b[0m \u001b[43m \u001b[49m\u001b[43mjoin\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mjoin\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 373\u001b[0m \u001b[43m \u001b[49m\u001b[43mkeys\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mkeys\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 374\u001b[0m \u001b[43m \u001b[49m\u001b[43mlevels\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mlevels\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 375\u001b[0m \u001b[43m \u001b[49m\u001b[43mnames\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mnames\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 376\u001b[0m \u001b[43m \u001b[49m\u001b[43mverify_integrity\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mverify_integrity\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 377\u001b[0m \u001b[43m \u001b[49m\u001b[43mcopy\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcopy\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 378\u001b[0m \u001b[43m \u001b[49m\u001b[43msort\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msort\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 379\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 381\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m op\u001b[38;5;241m.\u001b[39mget_result()\n",
"File \u001b[0;32m/mnt/data/home/j/git/lab.gsi/soil/soil/.env-v0.20/lib/python3.8/site-packages/pandas/core/reshape/concat.py:425\u001b[0m, in \u001b[0;36m_Concatenator.__init__\u001b[0;34m(self, objs, axis, join, keys, levels, names, ignore_index, verify_integrity, copy, sort)\u001b[0m\n\u001b[1;32m 422\u001b[0m objs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mlist\u001b[39m(objs)\n\u001b[1;32m 424\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(objs) \u001b[38;5;241m==\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[0;32m--> 425\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mNo objects to concatenate\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 427\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m keys \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 428\u001b[0m objs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mlist\u001b[39m(com\u001b[38;5;241m.\u001b[39mnot_none(\u001b[38;5;241m*\u001b[39mobjs))\n",
"\u001b[0;31mValueError\u001b[0m: No objects to concatenate"
]
}
],
"source": [
"evodumb = analysis.read_data('soil_output/Sim_all_dumb/', process=analysis.get_count, group=True, keys=['id']);"
]
@@ -721,9 +729,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "venv-soil",
"language": "python",
"name": "python3"
"name": "venv-soil"
},
"language_info": {
"codemirror_mode": {
@@ -735,7 +743,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.8.10"
},
"toc": {
"colors": {

View File

@@ -1,133 +0,0 @@
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_dumb
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_half_herd
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_wise_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_all_wise
network_agents:
- agent_class: newsspread.WiseViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50

View File

@@ -1,87 +0,0 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging


class DumbViewer(FSM, NetworkAgent):
    """
    A viewer that gets infected via TV (if it has one) and tries to infect
    its neighbors once it's infected.
    """

    prob_neighbor_spread = 0.5
    prob_tv_spread = 0.1
    has_been_infected = False

    @default_state
    @state
    def neutral(self):
        if self["has_tv"]:
            if self.prob(self.model["prob_tv_spread"]):
                return self.infected
        if self.has_been_infected:
            return self.infected

    @state
    def infected(self):
        for neighbor in self.get_neighbors(state_id=self.neutral.id):
            if self.prob(self.model["prob_neighbor_spread"]):
                neighbor.infect()

    def infect(self):
        """
        This is not a state. It is a function that other agents can use to try to
        infect this agent. DumbViewer always gets infected, but other agents like
        HerdViewer might not become infected right away
        """
        self.has_been_infected = True


class HerdViewer(DumbViewer):
    """
    A viewer whose probability of infection depends on the state of its neighbors.
    """

    def infect(self):
        """Notice again that this is NOT a state. See DumbViewer.infect for reference"""
        infected = self.count_neighbors(state_id=self.infected.id)
        total = self.count_neighbors()
        prob_infect = self.model["prob_neighbor_spread"] * infected / total
        self.debug("prob_infect", prob_infect)
        if self.prob(prob_infect):
            self.has_been_infected = True


class WiseViewer(HerdViewer):
    """
    A viewer that can change its mind.
    """

    defaults = {
        "prob_neighbor_spread": 0.5,
        "prob_neighbor_cure": 0.25,
        "prob_tv_spread": 0.1,
    }

    @state
    def cured(self):
        prob_cure = self.model["prob_neighbor_cure"]
        for neighbor in self.get_neighbors(state_id=self.infected.id):
            if self.prob(prob_cure):
                try:
                    neighbor.cure()
                except AttributeError:
                    self.debug("Viewer {} cannot be cured".format(neighbor.id))

    def cure(self):
        self.has_been_cured = True

    @state
    def infected(self):
        if self.has_been_cured:
            return self.cured
        cured = max(self.count_neighbors(self.cured.id), 1.0)
        infected = max(self.count_neighbors(self.infected.id), 1.0)
        prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
        if self.prob(prob_cure):
            return self.cured

View File

@@ -0,0 +1,134 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
from soil.parameters import *
import logging

from soil.environment import Environment


class DumbViewer(FSM, NetworkAgent):
    """
    A viewer that gets infected via TV (if it has one) and tries to infect
    its neighbors once it's infected.
    """

    has_been_infected: bool = False
    has_tv: bool = False

    @default_state
    @state
    def neutral(self):
        if self.has_tv:
            if self.prob(self.get("prob_tv_spread")):
                return self.infected
        if self.has_been_infected:
            return self.infected

    @state
    def infected(self):
        for neighbor in self.get_neighbors(state_id=self.neutral.id):
            if self.prob(self.get("prob_neighbor_spread")):
                neighbor.infect()

    def infect(self):
        """
        This is not a state. It is a function that other agents can use to try to
        infect this agent. DumbViewer always gets infected, but other agents like
        HerdViewer might not become infected right away
        """
        self.has_been_infected = True


class HerdViewer(DumbViewer):
    """
    A viewer whose probability of infection depends on the state of its neighbors.
    """

    def infect(self):
        """Notice again that this is NOT a state. See DumbViewer.infect for reference"""
        infected = self.count_neighbors(state_id=self.infected.id)
        total = self.count_neighbors()
        prob_infect = self.get("prob_neighbor_spread") * infected / total
        self.debug("prob_infect", prob_infect)
        if self.prob(prob_infect):
            self.has_been_infected = True


class WiseViewer(HerdViewer):
    """
    A viewer that can change its mind.
    """

    @state
    def cured(self):
        prob_cure = self.get("prob_neighbor_cure")
        for neighbor in self.get_neighbors(state_id=self.infected.id):
            if self.prob(prob_cure):
                try:
                    neighbor.cure()
                except AttributeError:
                    self.debug("Viewer {} cannot be cured".format(neighbor.id))

    def cure(self):
        self.has_been_cured = True

    @state
    def infected(self):
        if self.has_been_cured:
            return self.cured
        cured = max(self.count_neighbors(self.cured.id), 1.0)
        infected = max(self.count_neighbors(self.infected.id), 1.0)
        prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
        if self.prob(prob_cure):
            return self.cured


class NewsSpread(Environment):
    ratio_dumb: probability = 1
    ratio_herd: probability = 0
    ratio_wise: probability = 0
    prob_tv_spread: probability = 0.1
    prob_neighbor_spread: probability = 0.1
    prob_neighbor_cure: probability = 0.05

    def init(self):
        self.populate_network([DumbViewer, HerdViewer, WiseViewer],
                              [self.ratio_dumb, self.ratio_herd, self.ratio_wise])


from itertools import product
from soil import Simulation

# We want to investigate the effect of different agent distributions on the spread of news.
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
# one with a small-world structure and one with a connected structure.

counter = 0
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
    for (generator, netparams) in {
        "barabasi_albert_graph": {"m": 5},
        "erdos_renyi_graph": {"p": 0.1},
    }.items():
        print(r1, r2, 1-r1-r2, generator)
        # Create new simulation
        netparams["n"] = 500
        Simulation(
            name='newspread_sim',
            model=NewsSpread,
            parameters=dict(
                ratio_dumb=r1,
                ratio_herd=r2,
                ratio_wise=1-r1-r2,
                network_generator=generator,
                network_params=netparams,
                prob_neighbor_spread=0,
            ),
            iterations=5,
            max_steps=300,
            dump=False,
        ).run()
        counter += 1
        # Run all the necessary instances

print(f"A total of {counter} simulations were run.")

View File

@@ -1,41 +0,0 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
return G
class MyAgent(agents.FSM):
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if agents.prob(0.2):
self.info("This runs 2/10 times on average")
s = Simulation(
name="Programmatic",
network_params={"generator": mygenerator},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()
# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')

View File

@@ -0,0 +1,53 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, Environment, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
G.add_node(2)
return G
class MyAgent(agents.NetworkAgent, agents.FSM):
times_run = 0
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if self.prob(0.2):
self.times_run += 1
self.info("This runs 2/10 times on average")
class ProgrammaticEnv(Environment):
def init(self):
self.create_network(generator=mygenerator)
assert len(self.G)
self.populate_network(agent_class=MyAgent)
self.add_agent_reporter('times_run')
simulation = Simulation(
name="Programmatic",
model=ProgrammaticEnv,
seed='Program',
iterations=1,
max_time=100,
dump=False,
)
if __name__ == "__main__":
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = simulation.run()
for agent in envs[0].agents:
print(agent.times_run)
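The "2/10 times on average" comment above refers to the Bernoulli draw behind `self.prob(0.2)`. A minimal sketch of that mechanic with the standard library only (the `prob` helper below is a stand-in for illustration, not soil's implementation):

import random

def prob(p, rng):
    # Bernoulli trial: True with probability p
    return rng.random() < p

rng = random.Random("Program")  # fixed seed, mirroring seed='Program' above
hits = sum(prob(0.2, rng) for _ in range(10_000))
print(hits / 10_000)  # close to 0.2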

View File

@@ -1,26 +0,0 @@
---
name: pubcrawl
num_trials: 3
max_steps: 10
dump: false
network_params:
# Generate 30 empty nodes. They will be assigned a network agent
generator: empty_graph
n: 30
network_agents:
- agent_class: pubcrawl.Patron
description: Extroverted patron
state:
openness: 1.0
weight: 9
- agent_class: pubcrawl.Patron
description: Introverted patron
state:
openness: 0.1
weight: 1
environment_agents:
- agent_class: pubcrawl.Police
environment_class: pubcrawl.CityPubs
environment_params:
altercations: 0
number_of_pubs: 3

View File

@@ -1,6 +1,7 @@
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment
from soil import Environment, Simulation, parameters
from itertools import islice
import networkx as nx
import logging
@@ -8,19 +9,24 @@ class CityPubs(Environment):
"""Environment with Pubs"""
level = logging.INFO
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
super(CityPubs, self).__init__(*args, **kwargs)
pubs = {}
for i in range(number_of_pubs):
number_of_pubs: parameters.Integer = 3
ratio_extroverted: parameters.probability = 0.1
pub_capacity: parameters.Integer = 10
def init(self):
self.pubs = {}
for i in range(self.number_of_pubs):
newpub = {
"name": "The awesome pub #{}".format(i),
"open": True,
"capacity": pub_capacity,
"capacity": self.pub_capacity,
"occupancy": 0,
}
pubs[newpub["name"]] = newpub
self["pubs"] = pubs
self.pubs[newpub["name"]] = newpub
self.add_agent(agent_class=Police)
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
[self.ratio_extroverted, 1-self.ratio_extroverted])
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
def enter(self, pub_id, *nodes):
"""Agents will try to enter. The pub checks if it is possible"""
@@ -146,10 +152,10 @@ class Patron(FSM, NetworkAgent):
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug("Hooray! new friend: {}".format(friend.id))
self.debug("Hooray! new friend: {}".format(friend.unique_id))
befriended = True
else:
self.debug("{} does not want to be friends".format(friend.id))
self.debug("{} does not want to be friends".format(friend.unique_id))
return befriended
@@ -163,13 +169,27 @@ class Police(FSM):
def patrol(self):
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
for drunk in drunksters:
self.info("Kicking out the trash: {}".format(drunk.id))
self.info("Kicking out the trash: {}".format(drunk.unique_id))
drunk.kick_out()
else:
self.info("No trash to take out. Too bad.")
if __name__ == "__main__":
from soil import simulation
sim = Simulation(
model=CityPubs,
name="pubcrawl",
iterations=3,
max_steps=10,
dump=False,
parameters=dict(
network_generator=nx.empty_graph,
network_params={"n": 30},
model=CityPubs,
altercations=0,
number_of_pubs=3,
)
)
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
if __name__ == "__main__":
sim.run(parallel=False)
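The simulation above starts from an empty 30-node graph (`network_generator=nx.empty_graph`), so every edge seen afterwards comes from agents befriending each other. A quick standalone check of that starting topology:

import networkx as nx

G = nx.empty_graph(30)
print(G.number_of_nodes(), G.number_of_edges())  # 30 nodes, 0 edges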

View File

@@ -1,42 +0,0 @@
---
version: '2'
name: rabbits_basic
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

View File

@@ -1,42 +0,0 @@
---
version: '2'
name: rabbits_improved
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

View File

@@ -1,23 +1,20 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
from rabbits_basic_sim import RabbitEnv
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class RabbitsImprovedEnv(RabbitEnv):
def init(self):
"""Initialize the environment with the new versions of the agents"""
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
class Rabbit(FSM, NetworkAgent):
@@ -150,8 +147,7 @@ class RandomAccident(BaseAgent):
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
with easy("rabbits.yml") as sim:
sim.run()
if __name__ == "__main__":
sim.run()

View File

@@ -1,18 +1,29 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
prob_death: params.probability = 1e-100
def init(self):
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
@report
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@report
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@report
@property
def num_females(self):
return self.count_agents(agent_class=Female)
@@ -129,11 +140,11 @@ class RandomAccident(BaseAgent):
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
prob_death = self.model.prob_death * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
for i in self.get_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
@@ -143,8 +154,8 @@ class RandomAccident(BaseAgent):
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()
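The accident probability above scales with the order of magnitude of the rabbit population, via `math.floor(math.log10(...))`. A small sketch of how that factor behaves (the base probability here is just an example value, not the 1e-100 default):

import math

prob_death = 0.001
for rabbits_alive in (1, 9, 10, 99, 100, 1000):
    factor = math.floor(math.log10(max(1, rabbits_alive)))
    print(rabbits_alive, factor, prob_death * factor)
# Populations below 10 rabbits are never culled (factor 0); 1000 rabbits triple the base probability.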

View File

@@ -2,7 +2,7 @@
Example of setting a custom (random) delay between agent activations.
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from soil import Simulation, agents, Environment
from soil.time import Delta
@@ -29,14 +29,18 @@ class MyAgent(agents.FSM):
return None, Delta(self.random.expovariate(1 / 16))
class RandomEnv(Environment):
def init(self):
self.add_agent(agent_class=MyAgent)
s = Simulation(
name="Programmatic",
network_agents=[{"agent_class": MyAgent, "id": 0}],
topology={"nodes": [{"id": 0}], "links": []},
num_trials=1,
model=RandomEnv,
iterations=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
dump=False,
)

View File

@@ -1,30 +0,0 @@
---
sampler:
method: "SALib.sample.morris.sample"
N: 10
template:
group: simple
num_trials: 1
interval: 1
max_steps: 2
seed: "CompleteSeed!"
dump: false
model_params:
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_class: CounterModel
weight: "{{ x1 }}"
state:
state_id: 0
- agent_class: AggregatedCounter
weight: "{{ 1 - x1 }}"
name: "{{ x3 }}"
skip_test: true
vars:
bounds:
x1: [0, 1]
x2: [1, 2]
fixed:
x3: ["a", "b", "c"]

View File

@@ -1,62 +0,0 @@
name: TerroristNetworkModel_sim
max_steps: 150
num_trials: 1
model_params:
network_params:
generator: random_geometric_graph
radius: 0.2
# generator: geographical_threshold_graph
# theta: 20
n: 100
network_agents:
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.8
state:
id: civilian # Civilians
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.1
state:
id: leader # Leaders
- agent_class: TerroristNetworkModel.TrainingAreaModel
weight: 0.05
state:
id: terrorist # Terrorism
- agent_class: TerroristNetworkModel.HavenModel
weight: 0.05
state:
id: civilian # Civilian
# TerroristSpreadModel
information_spread_intensity: 0.7
terrorist_additional_influence: 0.035
max_vulnerability: 0.7
prob_interaction: 0.5
# TrainingAreaModel and HavenModel
training_influence: 0.20
haven_influence: 0.20
# TerroristNetworkModel
vision_range: 0.30
sphere_influence: 2
weight_social_distance: 0.035
weight_link_distance: 0.035
visualization_params:
# Icons downloaded from https://www.iconfinder.com/
shape_property: agent
shapes:
TrainingAreaModel: target
HavenModel: home
TerroristNetworkModel: person
colors:
- attr_id: civilian
color: '#40de40'
- attr_id: terrorist
color: red
- attr_id: leader
color: '#c16a6a'
background_image: 'map_4800x2860.jpg'
background_opacity: '0.9'
background_filter_color: 'blue'
skip_test: true # This simulation takes too long for automated tests.

View File

@@ -1,8 +1,47 @@
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, state, default_state
from soil import Environment
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
from soil import Environment, Simulation
from soil.parameters import *
from soil.utils import int_seed
class TerroristEnvironment(Environment):
n: Integer = 100
radius: Float = 0.2
information_spread_intensity: probability = 0.7
terrorist_additional_influence: probability = 0.03
terrorist_additional_influence: probability = 0.035
max_vulnerability: probability = 0.7
prob_interaction: probability = 0.5
# TrainingAreaModel and HavenModel
training_influence: probability = 0.20
haven_influence: probability = 0.20
# TerroristNetworkModel
vision_range: Float = 0.30
sphere_influence: Integer = 2
weight_social_distance: Float = 0.035
weight_link_distance: Float = 0.035
ratio_civil: probability = 0.8
ratio_leader: probability = 0.1
ratio_training: probability = 0.05
ratio_haven: probability = 0.05
def init(self):
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
self.populate_network([
TerroristNetworkModel.w(state_id='civilian'),
TerroristNetworkModel.w(state_id='leader'),
TrainingAreaModel,
HavenModel
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
def generator(self, *args, **kwargs):
return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
@@ -13,47 +52,35 @@ class TerroristSpreadModel(FSM, Geo):
min_vulnerability (optional else zero)
max_vulnerability
prob_interaction
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
information_spread_intensity = 0.1
terrorist_additional_influence = 0.1
min_vulnerability = 0
max_vulnerability = 1
self.information_spread_intensity = model.environment_params[
"information_spread_intensity"
]
self.terrorist_additional_influence = model.environment_params[
"terrorist_additional_influence"
]
self.prob_interaction = model.environment_params["prob_interaction"]
if self["id"] == self.civilian.id: # Civilian
self.mean_belief = self.random.uniform(0.00, 0.5)
elif self["id"] == self.terrorist.id: # Terrorist
def init(self):
if self.state_id == self.civilian.id: # Civilian
self.mean_belief = self.model.random.uniform(0.00, 0.5)
elif self.state_id == self.terrorist.id: # Terrorist
self.mean_belief = self.random.uniform(0.8, 1.00)
elif self["id"] == self.leader.id: # Leader
elif self.state_id == self.leader.id: # Leader
self.mean_belief = 1.00
else:
raise Exception("Invalid state id: {}".format(self["id"]))
if "min_vulnerability" in model.environment_params:
self.vulnerability = self.random.uniform(
model.environment_params["min_vulnerability"],
model.environment_params["max_vulnerability"],
)
else:
self.vulnerability = self.random.uniform(
0, model.environment_params["max_vulnerability"]
)
self.vulnerability = self.random.uniform(
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
)
@default_state
@state
def civilian(self):
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(
n for n in neighbours if self.random.random() <= self.prob_interaction
n for n in neighbours if self.random.random() <= self.model.prob_interaction
)
influence = sum(self.degree(i) for i in interactions)
mean_belief = sum(
@@ -99,7 +126,7 @@ class TerroristSpreadModel(FSM, Geo):
)
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
@@ -108,14 +135,13 @@ class TerroristSpreadModel(FSM, Geo):
return
return self.leader
def ego_search(self, steps=1, center=False, node=None, **kwargs):
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = as_node(node if node is not None else self)
node = agent.node_id if agent else self.node_id
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
def degree(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_degree"))
@@ -123,10 +149,9 @@ class TerroristSpreadModel(FSM, Geo):
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[node]
return self.model._degree[agent.node_id]
def betweenness(self, node, force=False):
node = as_node(node)
def betweenness(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_betweenness"))
@@ -134,7 +159,7 @@ class TerroristSpreadModel(FSM, Geo):
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[node]
return self.model._betweenness[agent.node_id]
class TrainingAreaModel(FSM, Geo):
@@ -147,13 +172,12 @@ class TrainingAreaModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params["training_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
training_influence = 0.1
min_vulnerability = 0
def init(self):
self.mean_believe = 1
self.vulnerability = 0
@default_state
@state
@@ -177,18 +201,19 @@ class HavenModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params["haven_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
self.max_vulnerability = model.environment_params["max_vulnerability"]
min_vulnerability = 0
haven_influence = 0.1
max_vulnerability = 0.5
def init(self):
self.mean_believe = 0
self.vulnerability = 0
def get_occupants(self, **kwargs):
return self.get_neighbors(agent_class=TerroristSpreadModel, **kwargs)
return self.get_neighbors(agent_class=TerroristSpreadModel,
**kwargs)
@default_state
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
@@ -224,13 +249,10 @@ class TerroristNetworkModel(TerroristSpreadModel):
weight_link_distance
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = model.environment_params["vision_range"]
self.sphere_influence = model.environment_params["sphere_influence"]
self.weight_social_distance = model.environment_params["weight_social_distance"]
self.weight_link_distance = model.environment_params["weight_link_distance"]
sphere_influence: float = 1
vision_range: float = 1
weight_social_distance: float = 0.5
weight_link_distance: float = 0.2
@state
def terrorist(self):
@@ -257,28 +279,26 @@ class TerroristNetworkModel(TerroristSpreadModel):
)
)
neighbours = set(
agent.id
for agent in self.get_neighbors(
agent_class=TerroristNetworkModel
)
agent.unique_id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id)
spatial_proximity = 1 - self.get_distance(agent.id)
social_distance = 1 / self.shortest_path_length(agent.unique_id)
spatial_proximity = 1 - self.get_distance(agent.unique_id)
prob_new_interaction = (
self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent["id"] == agent.civilian.id
agent.state_id == "civilian"
and self.random.random() < prob_new_interaction
):
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x)
dy = abs(source_y - target_y)
@@ -286,6 +306,36 @@ class TerroristNetworkModel(TerroristSpreadModel):
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.G, self.id, target)
return nx.shortest_path_length(self.G, self.unique_id, target)
except nx.NetworkXNoPath:
return float("inf")
sim = Simulation(
model=TerroristEnvironment,
iterations=1,
name="TerroristNetworkModel_sim",
max_steps=150,
seed="default2",
skip_test=False,
dump=False,
)
# TODO: integrate visualization
# visualization_params:
# # Icons downloaded from https://www.iconfinder.com/
# shape_property: agent
# shapes:
# TrainingAreaModel: target
# HavenModel: home
# TerroristNetworkModel: person
# colors:
# - attr_id: civilian
# color: '#40de40'
# - attr_id: terrorist
# color: red
# - attr_id: leader
# color: '#c16a6a'
# background_image: 'map_4800x2860.jpg'
# background_opacity: '0.9'
# background_filter_color: 'blue'
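The `generator` method above wraps `nx.random_geometric_graph` so the topology is reproducible from the simulation seed (`int_seed` is soil-specific). A standalone sketch with a plain integer seed, using the same `n` and `radius` defaults as `TerroristEnvironment`:

import networkx as nx

# 100 nodes placed uniformly in the unit square; nodes closer than 0.2 are connected
G = nx.random_geometric_graph(100, 0.2, seed=42)
print(G.number_of_nodes(), G.number_of_edges())

# positions are stored in the "pos" node attribute, which get_distance() reads above
pos = nx.get_node_attributes(G, "pos")
print(pos[0])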

View File

@@ -1,15 +0,0 @@
---
name: torvalds_example
max_steps: 10
interval: 2
model_params:
agent_class: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'

25
examples/torvalds_sim.py Normal file
View File

@@ -0,0 +1,25 @@
from soil import Environment, Simulation, CounterModel, report

# Get directory path for current file
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))


class TorvaldsEnv(Environment):
    def init(self):
        self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
        self.populate_network(CounterModel, skill_level='beginner')
        self.agent(node_id="Torvalds").skill_level = 'God'
        self.agent(node_id="balkian").skill_level = 'developer'
        self.add_agent_reporter("times")

    @report
    def god_developers(self):
        return self.count_agents(skill_level='God')


sim = Simulation(name='torvalds_example',
                 max_steps=10,
                 interval=2,
                 model=TorvaldsEnv)
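For context, the `create_network(path=...)` call above points at a plain edgelist file. A standalone sketch of what reading it directly with networkx could look like, independently of soil's helper (node names depend on the file contents):

import networkx as nx
import os

currentdir = os.path.dirname(os.path.abspath(__file__))
G = nx.read_edgelist(os.path.join(currentdir, "torvalds.edgelist"))

print(G.number_of_nodes(), G.number_of_edges())
print("Torvalds" in G)  # the node that later gets skill_level='God'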

File diff suppressed because one or more lines are too long

View File

@@ -5,6 +5,9 @@ pyyaml>=5.1
pandas>=1
SALib>=1.3
Jinja2
Mesa>=1.1
Mesa>=1.2
pydantic>=1.9
sqlalchemy>=1.4
typing-extensions>=4.4
annotated-types>=0.4
tqdm>=4.64

View File

@@ -1,3 +1,7 @@
[metadata]
long_description = file: README.md
long_description_content_type = text/markdown
[aliases]
test=pytest
[tool:pytest]

View File

@@ -17,9 +17,9 @@ def parse_requirements(filename):
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require={
'mesa': ['mesa>=0.8.9'],
'geo': ['scipy>=1.3'],
'web': ['tornado']
'web': ['tornado'],
'ipython': ['ipython==8.12', 'nbformat==5.8'],
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]
@@ -44,13 +44,18 @@ setup(
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
install_requires=install_reqs,
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
python_requires=">=3.8",
entry_points={
'console_scripts':
['soil = soil.__main__:main',

View File

@@ -1 +1 @@
0.30.0rc2
1.0.0rc2

View File

@@ -1,6 +1,7 @@
from __future__ import annotations
import importlib
from importlib.resources import path
import sys
import os
import logging
@@ -14,29 +15,34 @@ try:
except NameError:
basestring = str
from pathlib import Path
from .analysis import *
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment, EventedEnvironment
from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
from .decorators import *
def main(
cfg="simulation.yml",
exporters=None,
parallel=None,
num_processes=1,
output="soil_output",
*,
do_run=False,
debug=False,
pdb=False,
**kwargs,
):
sim = None
if isinstance(cfg, Simulation):
sim = cfg
import argparse
from . import simulation
@@ -47,7 +53,7 @@ def main(
"file",
type=str,
nargs="?",
default=cfg if sim is None else '',
default=cfg if sim is None else "",
help="Configuration file for the simulation (e.g., YAML or JSON)",
)
parser.add_argument(
@@ -63,6 +69,11 @@ def main(
"--dry-run",
"--dry",
action="store_true",
help="Do not run the simulation",
)
parser.add_argument(
"--no-dump",
action="store_true",
help="Do not store the results of the simulation to disk, show in terminal instead.",
)
parser.add_argument(
@@ -77,7 +88,7 @@ def main(
"--graph",
"-g",
action="store_true",
help="Dump each trial's network topology as a GEXF graph. Defaults to false.",
help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
)
parser.add_argument(
"--csv",
@@ -92,12 +103,11 @@ def main(
default=output or "soil_output",
help="folder to write results to. It defaults to the current directory.",
)
if parallel is None:
parser.add_argument(
"--synchronous",
action="store_true",
help="Run trials serially and synchronously instead of in parallel. Defaults to false.",
)
parser.add_argument(
"--num-processes",
default=num_processes,
help="Number of processes to use for parallel execution. Defaults to 1.",
)
parser.add_argument(
"-e",
@@ -106,6 +116,29 @@ def main(
default=[],
help="Export environment and/or simulations using this exporter",
)
parser.add_argument(
"--max_time",
default="-1",
help="Set maximum time for the simulation to run. ",
)
parser.add_argument(
"--max_steps",
default="-1",
help="Set maximum number of steps for the simulation to run.",
)
parser.add_argument(
"--iterations",
default="",
help="Set maximum number of iterations (runs) for the simulation.",
)
parser.add_argument(
"--seed",
default=None,
help="Manually set a seed for the simulation.",
)
parser.add_argument(
"--only-convert",
@@ -127,14 +160,12 @@ def main(
)
args = parser.parse_args()
logger.setLevel(getattr(logging, (args.level or "INFO").upper()))
level = getattr(logging, (args.level or "INFO").upper())
logger.setLevel(level)
if args.version:
return
if parallel is None:
parallel = not args.synchronous
exporters = exporters or [
"default",
]
@@ -162,38 +193,49 @@ def main(
res = []
try:
exp_params = {}
opts = dict(
dry_run=args.dry_run,
dump=not args.no_dump,
debug=debug,
exporters=exporters,
num_processes=args.num_processes,
level=level,
outdir=output,
exporter_params=exp_params,
**kwargs)
if args.seed is not None:
opts["seed"] = args.seed
if args.iterations:
opts["iterations"] =int(args.iterations)
if sim:
logger.info("Loading simulation instance")
sim.dry_run = args.dry_run
sim.exporters = exporters
sim.parallel = parallel
sim.outdir = output
sims = [sim, ]
for (k, v) in opts.items():
setattr(sim, k, v)
sims = [sim]
else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
sims = list(simulation.iter_from_config(
args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs,
))
assert opts["debug"] == debug
sims = list(
simulation.iter_from_file(
args.file,
**opts,
)
)
for sim in sims:
assert sim.debug == debug
if args.set:
for s in args.set:
k, v = s.split("=", 1)[:2]
v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1]
target = sim
target = sim.parameters
if head:
for part in head[0].split("."):
try:
@@ -208,11 +250,9 @@ def main(
if args.only_convert:
print(sim.to_yaml())
continue
if do_run:
res.append(sim.run())
else:
print("not running")
res.append(sim)
max_time = float(args.max_time) if args.max_time != "-1" else None
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
res.append(sim.run(max_time=max_time, max_steps=max_steps))
except Exception as ex:
if args.pdb:
@@ -233,7 +273,7 @@ def main(
@contextmanager
def easy(cfg, pdb=False, debug=False, **kwargs):
try:
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
return main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
except Exception as e:
if os.environ.get("SOIL_POSTMORTEM"):
from .debugging import post_mortem
@@ -244,4 +284,4 @@ def easy(cfg, pdb=False, debug=False, **kwargs):
if __name__ == "__main__":
main(do_run=True)
main()

View File

@@ -2,8 +2,8 @@ from . import main as init_main
def main():
init_main(do_run=True)
init_main()
if __name__ == "__main__":
init_main(do_run=True)
init_main()

View File

@@ -22,10 +22,10 @@ class BassModel(FSM):
else:
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if self.prob((self["imitation_prob"] * num_neighbors_aware)):
if self.prob((self.imitation_prob * num_neighbors_aware)):
self.sentimentCorrelation = 1
return self.aware
@state
def aware(self):
self.die()
self.die()

View File

@@ -1,118 +0,0 @@
from . import FSM, state, default_state
class BigMarketModel(FSM):
"""
Settings:
Names:
enterprises [Array]
tweet_probability_enterprises [Array]
Users:
tweet_probability_users
tweet_relevant_probability
tweet_probability_about [Array]
sentiment_about [Array]
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params["enterprises"]
self.type = ""
if self.id < len(self.enterprises): # Enterprises
self._set_state(self.enterprise.id)
self.type = "Enterprise"
self.tweet_probability = environment.environment_params[
"tweet_probability_enterprises"
][self.id]
else: # normal users
self.type = "User"
self._set_state(self.user.id)
self.tweet_probability = environment.environment_params[
"tweet_probability_users"
]
self.tweet_relevant_probability = environment.environment_params[
"tweet_relevant_probability"
]
self.tweet_probability_about = environment.environment_params[
"tweet_probability_about"
] # List
self.sentiment_about = environment.environment_params[
"sentiment_about"
] # List
@state
def enterprise(self):
if self.random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbour users
for x in aware_neighbors:
if self.random.uniform(0, 10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
# Set limits (clamp to [-1, 1])
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id] < -1:
x.sentiment_about[self.id] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[self.id]
] = x.sentiment_about[self.id]
@state
def user(self):
if self.random.random() < self.tweet_probability: # Tweets
if (
self.random.random() < self.tweet_relevant_probability
): # Tweets something relevant
# Tweet probability per enterprise
for i in range(len(self.enterprises)):
random_num = self.random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
# NEGATIVE
self.userTweets("negative", i)
elif self.sentiment_about[i] == 0:
# NEUTRAL
pass
else:
# POSITIVE
self.userTweets("positive", i)
for i in range(
len(self.enterprises)
): # So that it never is set to 0 if there are not changes (logs)
self.attrs[
"sentiment_enterprise_%s" % self.enterprises[i]
] = self.sentiment_about[i]
def userTweets(self, sentiment, enterprise):
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] += 0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -= 0.003
else:
pass
# Set limits (clamp to [-1, 1])
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[enterprise]
] = x.sentiment_about[enterprise]

View File

@@ -1,6 +1,12 @@
from . import NetworkAgent
from . import BaseAgent, NetworkAgent
class Ticker(BaseAgent):
times = 0
def step(self):
self.times += 1
class CounterModel(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors

View File

@@ -1,14 +1,14 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent, as_node
from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, node=None, center=False, **kwargs):
def geo_search(self, radius, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = as_node(node if node is not None else self)
node = self.node_id
G = self.subgraph(**kwargs)
@@ -18,4 +18,4 @@ class Geo(NetworkAgent):
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
return [nodes[i] for i in indices if center or (nodes[i] != node)]
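`geo_search` above is a thin wrapper over scipy's KDTree radius query. A minimal sketch of the same lookup on a toy graph (positions and radius are made up for illustration):

import networkx as nx
from scipy.spatial import cKDTree as KDTree

G = nx.Graph()
G.add_node("a", pos=(0.0, 0.0))
G.add_node("b", pos=(0.1, 0.0))
G.add_node("c", pos=(0.9, 0.9))

pos = nx.get_node_attributes(G, "pos")
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords)  # cannot take a generator, hence the list above

# nodes within radius 0.2 of "a", excluding "a" itself (center=False behaviour)
indices = kdtree.query_ball_point(pos["a"], 0.2)
print([nodes[i] for i in indices if nodes[i] != "a"])  # ['b']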

View File

@@ -1,7 +1,7 @@
from . import BaseAgent
from . import Agent, state, default_state
class IndependentCascadeModel(BaseAgent):
class IndependentCascadeModel(Agent):
"""
Settings:
innovation_prob
@@ -9,42 +9,22 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.innovation_prob = self.env.environment_params["innovation_prob"]
self.imitation_prob = self.env.environment_params["imitation_prob"]
self.state["time_awareness"] = 0
self.state["sentimentCorrelation"] = 0
time_awareness = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
# Outside effects
@default_state
@state
def outside(self):
if self.prob(self.model.innovation_prob):
self.sentimentCorrelation = 1
self.time_awareness = self.model.now # To know when they have been infected
return self.imitate
def behaviour(self):
aware_neighbors_1_time_step = []
# Outside effects
if self.prob(self.innovation_prob):
if self.state["id"] == 0:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state[
"time_awareness"
] = self.env.now # To know when they have been infected
else:
pass
@state
def imitate(self):
aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
return
# Imitation effects
if self.state["id"] == 0:
aware_neighbors = self.get_neighbors(state_id=1)
for x in aware_neighbors:
if x.state["time_awareness"] == (self.env.now - 1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if self.prob(self.imitation_prob * num_neighbors_aware):
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
else:
pass
return
if self.prob(self.model.imitation_prob * len(aware_neighbors)):
self.sentimentCorrelation = 1
return self.outside

View File

@@ -1,270 +0,0 @@
import numpy as np
from . import BaseAgent
class SpreadModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
# Use a single generator with the same seed as `self.random`
random = np.random.default_rng(seed=self._seed)
self.prob_neutral_making_denier = random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.prob(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
class ControlModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = np.random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = np.random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = np.random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = np.random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
elif self.state["id"] == 4: # Beacon-off
self.beacon_off_behaviour()
elif self.state["id"] == 5: # Beacon-on
self.beacon_on_behaviour()
def neutral_behaviour(self):
self.state["visible"] = False
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.random(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
self.state["visible"] = False
def cured_behaviour(self):
self.state["visible"] = True
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
self.state["visible"] = True
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
def beacon_off_behaviour(self):
self.state["visible"] = False
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
self.state["id"] == 5 # Beacon on
def beacon_on_behaviour(self):
self.state["visible"] = False
# Cure (M2 feature added)
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
for neighbor in neutral_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
for neighbor in infected_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated

View File

@@ -1,8 +1,9 @@
import numpy as np
from . import FSM, state
from hashlib import sha512
from . import Agent, state, default_state
class SISaModel(FSM):
class SISaModel(Agent):
"""
Settings:
neutral_discontent_spon_prob
@@ -28,38 +29,45 @@ class SISaModel(FSM):
standard_variance
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
random = np.random.default_rng(seed=self._seed)
seed = self.model._seed
if isinstance(seed, (str, bytes, bytearray)):
if isinstance(seed, str):
seed = seed.encode()
seed = int.from_bytes(seed + sha512(seed).digest(), 'big')
random = np.random.default_rng(seed=seed)
self.neutral_discontent_spon_prob = random.normal(
self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
self.model.neutral_discontent_spon_prob, self.model.standard_variance
)
self.neutral_discontent_infected_prob = random.normal(
self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
self.model.neutral_discontent_infected_prob, self.model.standard_variance
)
self.neutral_content_spon_prob = random.normal(
self.env["neutral_content_spon_prob"], self.env["standard_variance"]
self.model.neutral_content_spon_prob, self.model.standard_variance
)
self.neutral_content_infected_prob = random.normal(
self.env["neutral_content_infected_prob"], self.env["standard_variance"]
self.model.neutral_content_infected_prob, self.model.standard_variance
)
self.discontent_neutral = random.normal(
self.env["discontent_neutral"], self.env["standard_variance"]
self.model.discontent_neutral, self.model.standard_variance
)
self.discontent_content = random.normal(
self.env["discontent_content"], self.env["variance_d_c"]
self.model.discontent_content, self.model.variance_d_c
)
self.content_discontent = random.normal(
self.env["content_discontent"], self.env["variance_c_d"]
self.model.content_discontent, self.model.variance_c_d
)
self.content_neutral = random.normal(
self.env["content_neutral"], self.env["standard_variance"]
            self.model.content_neutral, self.model.standard_variance
)
@default_state
@state
def neutral(self):
# Spontaneous effects
@@ -70,10 +78,10 @@ class SISaModel(FSM):
# Infected
discontent_neighbors = self.count_neighbors(state_id=self.discontent)
if self.prob(scontent_neighbors * self.neutral_discontent_infected_prob):
if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
return self.discontent
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.neutral_content_infected_prob):
if self.prob(content_neighbors * self.neutral_content_infected_prob):
return self.content
return self.neutral
@@ -85,7 +93,7 @@ class SISaModel(FSM):
# Superinfected
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.discontent_content):
if self.prob(content_neighbors * self.discontent_content):
return self.content
return self.discontent
@@ -97,6 +105,6 @@ class SISaModel(FSM):
# Superinfected
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
if self.prob(scontent_neighbors * self.content_discontent):
if self.prob(discontent_neighbors * self.content_discontent):
            return self.discontent
return self.content
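The seeding logic added to SISaModel.__init__ above can be read in isolation: numpy's default_rng accepts arbitrarily large integers, so a string or bytes seed is hashed with sha512 and folded into one. A standalone sketch of the same idea (the helper name is illustrative):

from hashlib import sha512
import numpy as np

def rng_from_seed(seed):
    """Accept an int, str, or bytes seed and return a numpy Generator."""
    if isinstance(seed, (str, bytes, bytearray)):
        if isinstance(seed, str):
            seed = seed.encode()
        seed = int.from_bytes(seed + sha512(seed).digest(), "big")
    return np.random.default_rng(seed=seed)

rng = rng_from_seed("my-simulation-seed")
print(rng.normal(0.1, 0.05))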


@@ -1,115 +0,0 @@
from . import BaseAgent
class SentimentCorrelationModel(BaseAgent):
"""
Settings:
outside_effects_prob
anger_prob
joy_prob
sadness_prob
disgust_prob
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.outside_effects_prob = environment.environment_params[
"outside_effects_prob"
]
self.anger_prob = environment.environment_params["anger_prob"]
self.joy_prob = environment.environment_params["joy_prob"]
self.sadness_prob = environment.environment_params["sadness_prob"]
self.disgust_prob = environment.environment_params["disgust_prob"]
self.state["time_awareness"] = []
for i in range(4): # In this model we have 4 sentiments
self.state["time_awareness"].append(
0
) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
self.state["sentimentCorrelation"] = 0
def step(self):
self.behaviour()
def behaviour(self):
angry_neighbors_1_time_step = []
joyful_neighbors_1_time_step = []
sad_neighbors_1_time_step = []
disgusted_neighbors_1_time_step = []
angry_neighbors = self.get_neighbors(state_id=1)
for x in angry_neighbors:
if x.state["time_awareness"][0] > (self.env.now - 500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighbors(state_id=2)
for x in joyful_neighbors:
if x.state["time_awareness"][1] > (self.env.now - 500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighbors(state_id=3)
for x in sad_neighbors:
if x.state["time_awareness"][2] > (self.env.now - 500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighbors(state_id=4)
for x in disgusted_neighbors:
if x.state["time_awareness"][3] > (self.env.now - 500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
anger_prob = self.anger_prob + (
len(angry_neighbors_1_time_step) * self.anger_prob
)
joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
sadness_prob = self.sadness_prob + (
len(sad_neighbors_1_time_step) * self.sadness_prob
)
disgust_prob = self.disgust_prob + (
len(disgusted_neighbors_1_time_step) * self.disgust_prob
)
outside_effects_prob = self.outside_effects_prob
num = self.random.random()
if num < outside_effects_prob:
self.state["id"] = self.random.randint(1, 4)
self.state["sentimentCorrelation"] = self.state[
"id"
] # It is stored when it has been infected for the dynamic network
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]
if num < anger_prob:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < joy_prob + anger_prob and num > anger_prob:
self.state["id"] = 2
self.state["sentimentCorrelation"] = 2
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
self.state["id"] = 3
self.state["sentimentCorrelation"] = 3
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif (
num < disgust_prob + sadness_prob + anger_prob + joy_prob
and num > sadness_prob + anger_prob + joy_prob
):
self.state["id"] = 4
self.state["sentimentCorrelation"] = 4
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]


@@ -11,19 +11,15 @@ import inspect
import types
import textwrap
import networkx as nx
import warnings
import sys
from typing import Any
from mesa import Agent as MesaAgent
from mesa import Agent as MesaAgent, Model
from typing import Dict, List
from .. import serialization, utils, time, config
def as_node(agent):
if isinstance(agent, BaseAgent):
return agent.id
return agent
from .. import serialization, network, utils, time, config
IGNORED_FIELDS = ("model", "logger")
@@ -81,7 +77,7 @@ class MetaAgent(ABCMeta):
else:
defaults[attr] = copy(func)
return super().__new__(mcls=mcls, name=name, bases=bases, namespace=new_nmspc)
return super().__new__(mcls, name, bases, new_nmspc)
class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
@@ -96,11 +92,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
"""
def __init__(self, unique_id, model, name=None, interval=None, **kwargs):
# Check for REQUIRED arguments
# Initialize agent parameters
if isinstance(unique_id, MesaAgent):
raise Exception()
def __init__(self, unique_id, model, name=None, init=True, interval=None, **kwargs):
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
@@ -126,6 +118,11 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
for (k, v) in kwargs.items():
setattr(self, k, v)
if init:
self.init()
def init(self):
pass
def __hash__(self):
return hash(self.unique_id)
@@ -133,9 +130,16 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
def prob(self, probability):
return prob(probability, self.model.random)
@classmethod
def w(cls, **kwargs):
return custom(cls, **kwargs)
# TODO: refactor to clean up mesa compatibility
@property
def id(self):
msg = "This attribute is deprecated. Use `unique_id` instead"
warnings.warn(msg, DeprecationWarning)
print(msg, file=sys.stderr)
return self.unique_id
@classmethod
@@ -185,7 +189,11 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
return it
def get(self, key, default=None):
return self[key] if key in self else default
if key in self:
return self[key]
elif key in self.model:
return self.model[key]
return default
@property
def now(self):
@@ -195,8 +203,10 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
# No environment
return None
def die(self):
self.info(f"agent dying")
def die(self, msg=None):
if msg:
self.info("Agent dying:", msg)
self.debug(f"agent dying")
self.alive = False
try:
self.model.schedule.remove(self)
@@ -205,14 +215,16 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
return time.NEVER
def step(self):
raise NotImplementedError("Agent must implement step method")
def _check_alive(self):
if not self.alive:
raise time.DeadAgent(self.unique_id)
return super().step() or time.Delta(self.interval)
def log(self, message, *args, level=logging.INFO, **kwargs):
def log(self, *message, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = message + " ".join(str(i) for i in args)
message = " ".join(str(i) for i in message)
message = "[@{:>4}]\t{:>10}: {}".format(self.now, repr(self), message)
        for k, v in kwargs.items():
            message += " {k}={v} ".format(k=k, v=v)
@@ -385,7 +397,7 @@ class AgentView(Mapping, Set):
def filter_agents(
agents,
agents: dict,
*id_args,
unique_id=None,
state_id=None,
@@ -414,7 +426,7 @@ def filter_agents(
if ids:
f = (agents[aid] for aid in ids if aid in agents)
else:
f = (a for a in agents.values())
f = agents.values()
if state_id is not None and not isinstance(state_id, (tuple, list)):
state_id = tuple([state_id])
@@ -564,9 +576,9 @@ def _from_fixed(
def _from_distro(
distro: List[config.AgentDistro],
n: int,
topology: str,
default: config.SingleAgentConfig,
random,
topology: str = None
) -> List[Dict[str, Any]]:
agents = []
@@ -630,14 +642,22 @@ def _from_distro(
from .network_agents import *
from .fsm import *
from .evented import *
from typing import Optional
class Agent(NetworkAgent, FSM, EventedAgent):
"""Default agent class, has both network and event capabilities"""
from ..environment import NetworkEnvironment
from .BassModel import *
from .BigMarketModel import *
from .IndependentCascadeModel import *
from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
try:
import scipy
from .Geo import Geo
@@ -645,3 +665,8 @@ except ImportError:
import sys
print("Could not load the Geo Agent, scipy is not installed", file=sys.stderr)
def custom(cls, **kwargs):
"""Create a new class from a template class and keyword arguments"""
return type(cls.__name__, (cls,), kwargs)
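For reference, the custom helper above (and the Agent.w classmethod that wraps it) builds a parameterised subclass on the fly by turning keyword arguments into class attributes. A minimal usage sketch with a purely illustrative class:

class Walker:
    speed = 1

    def describe(self):
        return f"{type(self).__name__} moving at {self.speed}"

def custom(cls, **kwargs):
    """Create a new class from a template class and keyword arguments"""
    return type(cls.__name__, (cls,), kwargs)

FastWalker = custom(Walker, speed=5)
print(FastWalker().describe())  # -> "Walker moving at 5"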


@@ -1,57 +1,77 @@
from . import BaseAgent
from ..events import Message, Tell, Ask, Reply, TimedOut
from ..time import Cond
from ..events import Message, Tell, Ask, TimedOut
from ..time import BaseCond
from functools import partial
from collections import deque
class Evented(BaseAgent):
class ReceivedOrTimeout(BaseCond):
def __init__(
self, agent, expiration=None, timeout=None, check=True, ignore=False, **kwargs
):
if expiration is None:
if timeout is not None:
expiration = agent.now + timeout
self.expiration = expiration
self.ignore = ignore
self.check = check
super().__init__(**kwargs)
def expired(self, time):
return self.expiration and self.expiration < time
def ready(self, agent, time):
return len(agent._inbox) or self.expired(time)
def return_value(self, agent):
if not self.ignore and self.expired(agent.now):
raise TimedOut("No messages received")
if self.check:
agent.check_messages()
return None
def schedule_next(self, time, delta, first=False):
if self._delta is not None:
delta = self._delta
return (time + delta, self)
def __repr__(self):
return f"ReceivedOrTimeout(expires={self.expiration})"
class EventedAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._inbox = deque()
self._received = 0
self._processed = 0
def on_receive(self, *args, **kwargs):
pass
def received(self, expiration=None, timeout=None):
current = self._received
if expiration is None:
expiration = float('inf') if timeout is None else self.now + timeout
def received(self, *args, **kwargs):
return ReceivedOrTimeout(self, *args, **kwargs)
if expiration < self.now:
raise ValueError("Invalid expiration time")
def tell(self, msg, sender=None):
self._inbox.append(Tell(timestamp=self.now, payload=msg, sender=sender))
def ready(agent):
return agent._received > current or agent.now >= expiration
def value(agent):
if agent.now > expiration:
raise TimedOut("No message received")
c = Cond(func=ready, return_func=value)
c._checked = True
return c
def tell(self, msg, sender):
self._received += 1
self._inbox.append(Tell(payload=msg, sender=sender))
def ask(self, msg, timeout=None):
self._received += 1
ask = Ask(payload=msg)
def ask(self, msg, timeout=None, **kwargs):
ask = Ask(timestamp=self.now, payload=msg, sender=self)
self._inbox.append(ask)
expiration = float('inf') if timeout is None else self.now + timeout
return ask.replied(expiration=expiration)
expiration = float("inf") if timeout is None else self.now + timeout
return ask.replied(expiration=expiration, **kwargs)
def check_messages(self):
changed = False
while self._inbox:
msg = self._inbox.popleft()
self._processed += 1
if msg.expired(self.now):
continue
changed = True
reply = self.on_receive(msg.payload, sender=msg.sender)
if isinstance(msg, Ask):
msg.reply = reply
return changed
Evented = EventedAgent
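The inbox handling above boils down to a small pattern: queue messages in a deque, skip the ones that have expired, and attach replies to Ask-style messages. A self-contained sketch of that pattern with a hypothetical message type (not the soil classes):

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Msg:
    payload: str
    expires_at: float = float("inf")
    reply: Optional[str] = None

    def expired(self, now):
        return now > self.expires_at

inbox = deque([Msg("ping"), Msg("stale", expires_at=0)])
now = 10

while inbox:
    msg = inbox.popleft()
    if msg.expired(now):
        continue                              # drop messages past their deadline
    msg.reply = f"handled {msg.payload}"      # reply, as check_messages does for Ask
    print(msg)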


@@ -1,10 +1,11 @@
from . import MetaAgent, BaseAgent
from ..time import Delta
from functools import partial, wraps
import inspect
def state(name=None):
def state(name=None, default=False):
def decorator(func, name=None):
"""
A state function should return either a state id, or a tuple (state_id, when)
@@ -38,10 +39,8 @@ def state(name=None):
self._last_return = None
self._last_except = None
func.id = name or func.__name__
func.is_default = False
func.is_default = default
return func
if callable(name):
@@ -87,8 +86,8 @@ class MetaFSM(MetaAgent):
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, **kwargs):
super(FSM, self).__init__(**kwargs)
def __init__(self, init=True, **kwargs):
super().__init__(**kwargs, init=False)
if not hasattr(self, "state_id"):
if not self._default_state:
raise ValueError(
@@ -97,12 +96,19 @@ class FSM(BaseAgent, metaclass=MetaFSM):
self.state_id = self._default_state.id
self._coroutine = None
self.default_interval = Delta(self.model.interval)
self._set_state(self.state_id)
if init:
self.init()
@classmethod
def states(cls):
return list(cls._states.keys())
def step(self):
self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
default_interval = super().step()
self._check_alive()
next_state = self._states[self.state_id](self)
when = None
@@ -122,7 +128,7 @@ class FSM(BaseAgent, metaclass=MetaFSM):
if next_state is not None:
self._set_state(next_state)
return when or default_interval
return when or self.default_interval
def _set_state(self, state, when=None):
if hasattr(state, "id"):
@@ -134,8 +140,8 @@ class FSM(BaseAgent, metaclass=MetaFSM):
self.model.schedule.add(self, when=when)
return state
def die(self):
return self.dead, super().die()
def die(self, *args, **kwargs):
return self.dead, super().die(*args, **kwargs)
@state
def dead(self):
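(The body of the dead state is unchanged by this hunk.) A stripped-down sketch of the revised state decorator, showing how the new default flag and the bare @state form interact; this is an illustration, not the full soil implementation:

def state(name=None, default=False):
    def decorator(func, name=name):
        func.id = name or func.__name__
        func.is_default = default
        return func

    if callable(name):
        # used as a bare @state
        return decorator(name, name=None)
    return decorator

@state
def neutral(agent):
    return "neutral"

@state("content", default=True)
def content_state(agent):
    return "content"

print(neutral.id, neutral.is_default)                # neutral False
print(content_state.id, content_state.is_default)    # content True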


@@ -2,20 +2,37 @@ from . import BaseAgent
class NetworkAgent(BaseAgent):
def __init__(self, *args, topology, node_id, **kwargs):
super().__init__(*args, **kwargs)
def __init__(self, *args, topology=None, init=True, node_id=None, **kwargs):
super().__init__(*args, init=False, **kwargs)
assert topology is not None
assert node_id is not None
self.G = topology
self.G = topology or self.model.G
assert self.G
if node_id is None:
nodes = self.random.choices(list(self.G.nodes), k=len(self.G))
for n_id in nodes:
if "agent" not in self.G.nodes[n_id] or self.G.nodes[n_id]["agent"] is None:
node_id = n_id
break
else:
node_id = len(self.G)
self.info(f"All nodes ({len(self.G)}) have an agent assigned, adding a new node to the graph for agent {self.unique_id}")
self.G.add_node(node_id)
assert node_id is not None
self.G.nodes[node_id]["agent"] = self
self.node_id = node_id
if init:
self.init()
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
if init:
self.init()
def iter_neighbors(self, **kwargs):
return self.iter_agents(limit_neighbors=True, **kwargs)
def get_neighbors(self, **kwargs):
return list(self.iter_agents(limit_neighbors=True, **kwargs))
return list(self.iter_neighbors(**kwargs))
@property
def node(self):
@@ -23,20 +40,18 @@ class NetworkAgent(BaseAgent):
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None
if isinstance(unique_id, list):
unique_ids = set(unique_id)
elif unique_id is not None:
unique_ids = set(
[
unique_id,
]
)
if unique_ids is not None:
try:
unique_ids = set(unique_id)
except TypeError:
unique_ids = set([unique_id])
if limit_neighbors:
neighbor_ids = set()
for node_id in self.G.neighbors(self.node_id):
if self.G.nodes[node_id].get("agent") is not None:
neighbor_ids.add(node_id)
agent = self.G.nodes[node_id].get("agent")
if agent is not None:
neighbor_ids.add(agent.unique_id)
if unique_ids:
unique_ids = unique_ids & neighbor_ids
else:
@@ -54,7 +69,7 @@ class NetworkAgent(BaseAgent):
return G
def remove_node(self):
print(f"Removing node for {self.unique_id}: {self.node_id}")
self.debug(f"Removing node for {self.unique_id}: {self.node_id}")
self.G.remove_node(self.node_id)
self.node_id = None
@@ -80,3 +95,6 @@ class NetworkAgent(BaseAgent):
if remove:
self.remove_node()
return super().die()
NetAgent = NetworkAgent
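The node-assignment logic added to NetworkAgent.__init__ above can be sketched outside the class: pick a random node without an agent, and grow the graph when every node is already taken. An illustrative helper, not repository code:

import random
import networkx as nx

def assign_node(G, agent, rng=random):
    """Return the id of the node assigned to `agent`, adding a node if needed."""
    candidates = rng.sample(list(G.nodes), k=len(G))
    for n_id in candidates:
        if G.nodes[n_id].get("agent") is None:
            break
    else:
        n_id = len(G)          # every node is taken: add a fresh one
        G.add_node(n_id)
    G.nodes[n_id]["agent"] = agent
    return n_id

G = nx.path_graph(3)
print([assign_node(G, f"agent-{i}") for i in range(5)])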
