Merge branch 'mesa'
@ -20,7 +20,7 @@ docker:
|
|||||||
test:
|
test:
|
||||||
tags:
|
tags:
|
||||||
- docker
|
- docker
|
||||||
image: python:3.7
|
image: python:3.8
|
||||||
stage: test
|
stage: test
|
||||||
script:
|
script:
|
||||||
- pip install -r requirements.txt -r test-requirements.txt
|
- pip install -r requirements.txt -r test-requirements.txt
|
||||||
@ -31,7 +31,7 @@ push_pypi:
|
|||||||
- tags
|
- tags
|
||||||
tags:
|
tags:
|
||||||
- docker
|
- docker
|
||||||
image: python:3.7
|
image: python:3.8
|
||||||
stage: publish
|
stage: publish
|
||||||
script:
|
script:
|
||||||
- echo $CI_COMMIT_TAG > soil/VERSION
|
- echo $CI_COMMIT_TAG > soil/VERSION
|
||||||
@ -44,7 +44,7 @@ check_pypi:
|
|||||||
- tags
|
- tags
|
||||||
tags:
|
tags:
|
||||||
- docker
|
- docker
|
||||||
image: python:3.7
|
image: python:3.8
|
||||||
stage: check_published
|
stage: check_published
|
||||||
script:
|
script:
|
||||||
- pip install soil==$CI_COMMIT_TAG
|
- pip install soil==$CI_COMMIT_TAG
|
||||||
|
25
CHANGELOG.md
@ -3,7 +3,30 @@ All notable changes to this project will be documented in this file.
|
|||||||
|
|
||||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||||
|
|
||||||
## [UNRELEASED]
|
## [1.0 UNRELEASED]
|
||||||
|
|
||||||
|
Version 1.0 introduced multiple changes, especially on the `Simulation` class and anything related to how configuration is handled.
|
||||||
|
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
|
||||||
|
|
||||||
|
### Added
|
||||||
|
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
|
||||||
|
* Environments now have a class method, `run`, to make them easier to use without a simulation. Notice that this is different from `run_model`, which is an instance method.
|
||||||
|
* Ability to run simulations using mesa models
|
||||||
|
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
|
||||||
|
* Agents can now have generators as a step function or a state. They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or `None`) can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
|
||||||
|
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a cartesian product (i.e., all possible combinations) of each parameter.
|
||||||
|
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
|
||||||
|
### Changed
|
||||||
|
* Configuration schema (`Simulation`) is very simplified. All simulations should be checked
|
||||||
|
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa
|
||||||
|
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (specific parameters for this environment). We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa.
|
||||||
|
* `num_trials` renamed to `iterations`
|
||||||
|
* General renaming of `trial` to `iteration`, to work better with `mesa`
|
||||||
|
* `model_parameters` renamed to `parameters` in simulation
|
||||||
|
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
|
||||||
|
|
||||||
|
### Removed
|
||||||
|
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
|
||||||
|
|
||||||
## [0.20.8]
|
## [0.20.8]
|
||||||
### Changed
|
### Changed
|
||||||
|
73
README.md
@ -1,10 +1,65 @@
|
|||||||
# [SOIL](https://github.com/gsi-upm/soil)
|
# [SOIL](https://github.com/gsi-upm/soil)
|
||||||
|
|
||||||
|
|
||||||
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
|
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
|
||||||
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
|
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
|
||||||
|
|
||||||
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
|
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
|
||||||
|
|
||||||
|
> **Warning**
|
||||||
|
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst).
|
||||||
|
|
||||||
|
## Features
|
||||||
|
|
||||||
|
* Integration with (social) networks (through `networkx`)
|
||||||
|
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
|
||||||
|
* Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
|
||||||
|
* Based on the topology of the network
|
||||||
|
* **Several types of abstractions for agents**:
|
||||||
|
* Finite state machine, where methods can be turned into a state
|
||||||
|
* Network agents, which have convenience methods to access the model's topology
|
||||||
|
* Generator-based agents, whose state is paused through a `yield` and resumed on the next step (see the sketch right after this list)
|
||||||
|
* **Reporting and data collection**:
|
||||||
|
* Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
|
||||||
|
* All data collected are exported by default to a SQLite database and a description file
|
||||||
|
* Options to export to other formats, such as CSV, or defining your own exporters
|
||||||
|
* A summary of the data collected is shown in the command line, for easy inspection
|
||||||
|
* **An event-based scheduler**
|
||||||
|
* Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
|
||||||
|
* Time intervals between each step are flexible.
|
||||||
|
* There are primitives to specify when the next execution of an agent should happen, or the conditions it should wait for
|
||||||
|
* **Actor-inspired** message-passing
|
||||||
|
* A simulation runner (`soil.Simulation`) that can:
|
||||||
|
* Run models in parallel
|
||||||
|
* Save results to different formats
|
||||||
|
* Simulation configuration files
|
||||||
|
* A command line interface (`soil`), to quickly run simulations with different parameters
|
||||||
|
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
|
||||||
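For example, here is a minimal sketch of a generator-based FSM agent (referred to in the feature list above). The `FSM`/`state`/`default_state` decorators and `soil.time.Delta` are the same pieces used in the bundled examples; the agent itself is made up for illustration:

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Waiter(FSM):
    """Illustrative agent: pauses inside a state, then switches state."""

    @default_state
    @state
    def waiting(self):
        self.info("Waiting before acting")
        yield Delta(2)       # pause here; the state resumes two time units later
        self.info("Done waiting")
        return self.leaving  # after the yield, a state can still be returned

    @state
    def leaving(self):
        self.die("Nothing else to do")
```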
|
|
||||||
|
## Mesa compatibility
|
||||||
|
|
||||||
|
SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
|
||||||
|
For instance, it should be possible to run a `mesa.Model` using a `soil.Simulation` and the `soil` CLI, or to use the `soil.TimedActivation` scheduler in a `mesa.Model`.
|
||||||
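As a rough sketch (not the only way to wire it up), running a vanilla `mesa.Model` through `soil.Simulation` could look like this; the model below is a throwaway placeholder, and `max_steps`/`dump` follow the usage in `examples/mesa/mesa_sim.py`:

```python
from mesa import Model
from soil import Simulation


class MyMesaModel(Model):
    """Placeholder mesa model, only here to illustrate the wiring."""

    def step(self):
        self.running = False  # a real model would advance its agents here


sim = Simulation(model=MyMesaModel, max_steps=5)

if __name__ == "__main__":
    sim.run(dump=False)
```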
|
|
||||||
|
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
|
||||||
|
For instance, you may add any `soil.agent` agent to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
|
||||||
|
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that may not work.
|
||||||
|
|
||||||
|
|
||||||
|
## Changes in version 0.3
|
||||||
|
|
||||||
|
Version 0.3 came packed with many changes to provide much better integration with MESA.
|
||||||
|
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
|
||||||
|
This translates to harder maintenance and a worse experience for newcomers.
|
||||||
|
In the end, we decided to make some breaking changes.
|
||||||
|
|
||||||
|
If you have an older Soil simulation, you have two options:
|
||||||
|
|
||||||
|
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
|
||||||
|
* Keep using a previous `soil` version.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
## Citation
|
## Citation
|
||||||
|
|
||||||
|
|
||||||
@ -31,24 +86,6 @@ If you use Soil in your research, don't forget to cite this paper:
|
|||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Mesa compatibility
|
|
||||||
|
|
||||||
Soil is in the process of becoming fully compatible with MESA.
|
|
||||||
As of this writing,
|
|
||||||
|
|
||||||
This is a non-exhaustive list of tasks to achieve compatibility:
|
|
||||||
|
|
||||||
* Environments.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. Might rename environment_agents to other_agents or sth like that
|
|
||||||
- [ ] Integrate `soil.Simulation` with mesa's runners:
|
|
||||||
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
|
|
||||||
- [ ] Integrate `soil.Environment` with `mesa.Model`:
|
|
||||||
- [x] `Soil.Environment` inherits from `mesa.Model`
|
|
||||||
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module.
|
|
||||||
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
|
|
||||||
- [x] Rename agent.id to unique_id?
|
|
||||||
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
|
|
||||||
- [ ] Document the new APIs and usage
|
|
||||||
|
|
||||||
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
|
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
|
||||||
|
|
||||||
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)
|
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)
|
||||||
|
@ -1,241 +0,0 @@
|
|||||||
Configuring a simulation
|
|
||||||
------------------------
|
|
||||||
|
|
||||||
There are two ways to configure a simulation: programmatically and with a configuration file.
|
|
||||||
In both cases, the parameters used are the same.
|
|
||||||
The advantage of a configuration file is that it is a clean declarative description, and it makes it easier to reproduce.
|
|
||||||
|
|
||||||
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
|
|
||||||
Here's an example (``example.yml``).
|
|
||||||
|
|
||||||
.. literalinclude:: example.yml
|
|
||||||
:language: yaml
|
|
||||||
|
|
||||||
|
|
||||||
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
|
|
||||||
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
|
|
||||||
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
|
|
||||||
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``.
|
|
||||||
The state of the agents will be updated every 2 seconds (``interval``).
|
|
||||||
|
|
||||||
Now run the simulation with the command line tool:
|
|
||||||
|
|
||||||
.. code:: bash
|
|
||||||
|
|
||||||
soil example.yml
|
|
||||||
|
|
||||||
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
|
|
||||||
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
|
|
||||||
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
soil_output
|
|
||||||
└── MyExampleSimulation
|
|
||||||
├── MyExampleSimulation.dumped.yml
|
|
||||||
├── MyExampleSimulation.simulation.pickle
|
|
||||||
├── MyExampleSimulation_trial_0.db.sqlite
|
|
||||||
├── MyExampleSimulation_trial_1.db.sqlite
|
|
||||||
└── MyExampleSimulation_trial_2.db.sqlite
|
|
||||||
|
|
||||||
|
|
||||||
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
|
|
||||||
|
|
||||||
Network
|
|
||||||
=======
|
|
||||||
|
|
||||||
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
|
|
||||||
|
|
||||||
Loading a network
|
|
||||||
#################
|
|
||||||
|
|
||||||
To load an existing network, specify its path in the configuration:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
---
|
|
||||||
network_params:
|
|
||||||
path: /tmp/mynetwork.gexf
|
|
||||||
|
|
||||||
Soil will try to guess what networkx method to use to read the file based on its extension.
|
|
||||||
However, we only test using ``gexf`` files.
|
|
||||||
|
|
||||||
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
---
|
|
||||||
topology:
|
|
||||||
nodes:
|
|
||||||
- id: First
|
|
||||||
- id: Second
|
|
||||||
links:
|
|
||||||
- source: First
|
|
||||||
target: Second
|
|
||||||
|
|
||||||
|
|
||||||
Generating a random network
|
|
||||||
###########################
|
|
||||||
|
|
||||||
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
|
|
||||||
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
network_params:
|
|
||||||
generator: complete_graph
|
|
||||||
n: 100
|
|
||||||
|
|
||||||
Environment
|
|
||||||
============
|
|
||||||
The environment is the place where the shared state of the simulation is stored.
|
|
||||||
For instance, the probability of disease outbreak.
|
|
||||||
The configuration file may specify the initial value of the environment parameters:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
environment_params:
|
|
||||||
daily_probability_of_earthquake: 0.001
|
|
||||||
number_of_earthquakes: 0
|
|
||||||
|
|
||||||
All agents have access to the environment parameters.
|
|
||||||
|
|
||||||
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
|
|
||||||
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
|
|
||||||
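A sketch of what such an environment could look like (the class name, method and import path are illustrative guesses, not part of the documented API):

.. code:: python

   import random

   from soil.environment import Environment


   class LotteryEnvironment(Environment):
       """Hypothetical environment that centralises a chance-based decision."""

       def play_lottery(self, agent):
           # A real implementation would read the probability from the
           # environment parameters; a constant keeps the sketch short.
           return random.random() < 0.001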
|
|
||||||
|
|
||||||
Agents
|
|
||||||
======
|
|
||||||
Agents are a way of modelling behavior.
|
|
||||||
Agents can be characterized with two variables: agent type (``agent_type``) and state.
|
|
||||||
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
|
|
||||||
Through the environment, it can access the network topology and the state of other agents.
|
|
||||||
|
|
||||||
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
|
|
||||||
|
|
||||||
Network Agents
|
|
||||||
##############
|
|
||||||
Network agents are attached to a node in the topology.
|
|
||||||
The configuration file allows you to specify how agents will be mapped to topology nodes.
|
|
||||||
|
|
||||||
The simplest way is to specify a single type of agent.
|
|
||||||
Hence, every node in the network will be associated with an agent of that type.
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
agent_type: SISaModel
|
|
||||||
|
|
||||||
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
|
|
||||||
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
network_agents:
|
|
||||||
- agent_type: SISaModel
|
|
||||||
weight: 1
|
|
||||||
- agent_type: CounterModel
|
|
||||||
weight: 5
|
|
||||||
|
|
||||||
The third option is to specify the type of agent on the node itself, e.g.:
|
|
||||||
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
topology:
|
|
||||||
nodes:
|
|
||||||
- id: first
|
|
||||||
agent_type: BaseAgent
|
|
||||||
states:
|
|
||||||
first:
|
|
||||||
agent_type: SISaModel
|
|
||||||
|
|
||||||
|
|
||||||
This would also work with a randomly generated network:
|
|
||||||
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
network:
|
|
||||||
generator: complete
|
|
||||||
n: 5
|
|
||||||
agent_type: BaseAgent
|
|
||||||
states:
|
|
||||||
- agent_type: SISaModel
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
In addition to agent type, you may add a custom initial state to the distribution.
|
|
||||||
This is very useful to add the same agent type with different states.
|
|
||||||
For example, to populate the network with SISaModel agents, roughly 10% of them in a discontent state:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
network_agents:
|
|
||||||
- agent_type: SISaModel
|
|
||||||
weight: 9
|
|
||||||
state:
|
|
||||||
id: neutral
|
|
||||||
- agent_type: SISaModel
|
|
||||||
weight: 1
|
|
||||||
state:
|
|
||||||
id: discontent
|
|
||||||
|
|
||||||
Lastly, the configuration may include initial state for one or more nodes.
|
|
||||||
For instance, to add a state for the two nodes in this configuration:
|
|
||||||
|
|
||||||
.. code:: yaml
|
|
||||||
|
|
||||||
agent_type: SISaModel
|
|
||||||
network:
|
|
||||||
generator: complete_graph
|
|
||||||
n: 2
|
|
||||||
states:
|
|
||||||
- id: content
|
|
||||||
- id: discontent
|
|
||||||
|
|
||||||
|
|
||||||
Or to add state only to specific nodes (by ``id``).
|
|
||||||
For example, to apply special skills to Linus Torvalds in a simulation:
|
|
||||||
|
|
||||||
.. literalinclude:: ../examples/torvalds.yml
|
|
||||||
:language: yaml
|
|
||||||
|
|
||||||
|
|
||||||
Environment Agents
|
|
||||||
##################
|
|
||||||
In addition to network agents, more agents can be added to the simulation.
|
|
||||||
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
|
|
||||||
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
environment_agents:
|
|
||||||
- agent_type: MyAgent
|
|
||||||
state:
|
|
||||||
mood: happy
|
|
||||||
- agent_type: DummyAgent
|
|
||||||
|
|
||||||
|
|
||||||
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
|
|
||||||
They are also useful to add behavior that has little to do with the network and the interactions within that network.
|
|
||||||
|
|
||||||
Templating
|
|
||||||
==========
|
|
||||||
|
|
||||||
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters in the simulation.
|
|
||||||
For instance, you may want to run a simulation with different agent distributions.
|
|
||||||
|
|
||||||
This can be done in Soil using **templates**.
|
|
||||||
A template is a configuration where some of the values are specified with a variable.
|
|
||||||
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
|
|
||||||
There are two types of variables, depending on how their values are decided:
|
|
||||||
|
|
||||||
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
|
|
||||||
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
|
|
||||||
|
|
||||||
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
|
|
||||||
|
|
||||||
Here is an example with a single fixed variable and two bounded variables:
|
|
||||||
|
|
||||||
.. literalinclude:: ../examples/template.yml
|
|
||||||
:language: yaml
|
|
@ -3,24 +3,29 @@ name: MyExampleSimulation
|
|||||||
max_time: 50
|
max_time: 50
|
||||||
num_trials: 3
|
num_trials: 3
|
||||||
interval: 2
|
interval: 2
|
||||||
network_params:
|
model_params:
|
||||||
|
topology:
|
||||||
|
params:
|
||||||
generator: barabasi_albert_graph
|
generator: barabasi_albert_graph
|
||||||
n: 100
|
n: 100
|
||||||
m: 2
|
m: 2
|
||||||
network_agents:
|
agents:
|
||||||
- agent_type: SISaModel
|
distribution:
|
||||||
weight: 1
|
- agent_class: SISaModel
|
||||||
|
topology: True
|
||||||
|
ratio: 0.1
|
||||||
state:
|
state:
|
||||||
id: content
|
state_id: content
|
||||||
- agent_type: SISaModel
|
- agent_class: SISaModel
|
||||||
weight: 1
|
topology: True
|
||||||
|
ratio: .1
|
||||||
state:
|
state:
|
||||||
id: discontent
|
state_id: discontent
|
||||||
- agent_type: SISaModel
|
- agent_class: SISaModel
|
||||||
weight: 8
|
topology: True
|
||||||
|
ratio: 0.8
|
||||||
state:
|
state:
|
||||||
id: neutral
|
state_id: neutral
|
||||||
environment_params:
|
|
||||||
prob_infect: 0.075
|
prob_infect: 0.075
|
||||||
neutral_discontent_spon_prob: 0.1
|
neutral_discontent_spon_prob: 0.1
|
||||||
neutral_discontent_infected_prob: 0.3
|
neutral_discontent_infected_prob: 0.3
|
||||||
|
@ -1,12 +1,20 @@
|
|||||||
.. Soil documentation master file, created by
|
|
||||||
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
|
|
||||||
You can adapt this file completely to your liking, but it should at least
|
|
||||||
contain the root `toctree` directive.
|
|
||||||
|
|
||||||
Welcome to Soil's documentation!
|
Welcome to Soil's documentation!
|
||||||
================================
|
================================
|
||||||
|
|
||||||
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
|
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
|
||||||
|
|
||||||
|
.. image:: soil.png
|
||||||
|
:width: 80%
|
||||||
|
:align: center
|
||||||
|
|
||||||
|
Soil can be installed through pip (see more details in the :doc:`installation` page):
|
||||||
|
|
||||||
|
.. code:: bash
|
||||||
|
|
||||||
|
pip install soil
|
||||||
|
|
||||||
|
|
||||||
|
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`_.
|
||||||
|
|
||||||
If you use Soil in your research, do not forget to cite this paper:
|
If you use Soil in your research, do not forget to cite this paper:
|
||||||
|
|
||||||
@ -38,8 +46,6 @@ If you use Soil in your research, do not forget to cite this paper:
|
|||||||
:caption: Learn more about soil:
|
:caption: Learn more about soil:
|
||||||
|
|
||||||
installation
|
installation
|
||||||
quickstart
|
|
||||||
configuration
|
|
||||||
Tutorial <soil_tutorial>
|
Tutorial <soil_tutorial>
|
||||||
|
|
||||||
..
|
..
|
||||||
|
@ -1,7 +1,10 @@
|
|||||||
Installation
|
Installation
|
||||||
------------
|
------------
|
||||||
|
|
||||||
The easiest way to install Soil is through pip, with Python >= 3.4:
|
Through pip
|
||||||
|
===========
|
||||||
|
|
||||||
|
The easiest way to install Soil is through pip, with Python >= 3.8:
|
||||||
|
|
||||||
.. code:: bash
|
.. code:: bash
|
||||||
|
|
||||||
@ -14,6 +17,10 @@ Now test that it worked by running the command line tool
|
|||||||
|
|
||||||
soil --help
|
soil --help
|
||||||
|
|
||||||
|
#or
|
||||||
|
|
||||||
|
python -m soil --help
|
||||||
|
|
||||||
Or, if you're using soil programmatically:
|
Or, if you're using soil programmatically:
|
||||||
|
|
||||||
.. code:: python
|
.. code:: python
|
||||||
@ -21,4 +28,38 @@ Or, if you're using using soil programmatically:
|
|||||||
import soil
|
import soil
|
||||||
print(soil.__version__)
|
print(soil.__version__)
|
||||||
|
|
||||||
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.
|
|
||||||
|
|
||||||
|
Web UI
|
||||||
|
======
|
||||||
|
|
||||||
|
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
|
||||||
|
To make it work, you have to install soil like this:
|
||||||
|
|
||||||
|
.. code::
|
||||||
|
|
||||||
|
pip install soil[web]
|
||||||
|
|
||||||
|
Once installed, the soil web UI can be run in two ways:
|
||||||
|
|
||||||
|
.. code::
|
||||||
|
|
||||||
|
soil-web
|
||||||
|
|
||||||
|
# OR
|
||||||
|
|
||||||
|
python -m soil.web
|
||||||
|
|
||||||
|
|
||||||
|
Development
|
||||||
|
===========
|
||||||
|
|
||||||
|
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
|
||||||
|
|
||||||
|
.. code:: bash
|
||||||
|
|
||||||
|
git clone https://github.com/gsi-upm/soil
|
||||||
|
cd soil
|
||||||
|
python -m venv .venv
|
||||||
|
source .venv/bin/activate
|
||||||
|
pip install --editable .
|
@ -12,7 +12,7 @@ set BUILDDIR=_build
|
|||||||
set SPHINXPROJ=Soil
|
set SPHINXPROJ=Soil
|
||||||
|
|
||||||
if "%1" == "" goto help
|
if "%1" == "" goto help
|
||||||
|
|
||||||
%SPHINXBUILD% >NUL 2>NUL
|
%SPHINXBUILD% >NUL 2>NUL
|
||||||
if errorlevel 9009 (
|
if errorlevel 9009 (
|
||||||
echo.
|
echo.
|
||||||
|
22
docs/mesa.rst
Normal file
@ -0,0 +1,22 @@
|
|||||||
|
Mesa compatibility
|
||||||
|
------------------
|
||||||
|
|
||||||
|
Soil is in the process of becoming fully compatible with MESA.
|
||||||
|
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
|
||||||
|
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
|
||||||
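For instance, something along these lines should eventually be possible (a sketch only; `create_network`, `add_agents` and `Simulation` are used as in the bundled examples, and the mesa agent is a throwaway placeholder):

.. code:: python

   from mesa import Agent as MesaAgent
   from networkx import complete_graph

   from soil import Environment, Simulation


   class Greeter(MesaAgent):
       """A plain mesa agent; nothing soil-specific about it."""

       def step(self):
           print(f"Hello from mesa agent {self.unique_id}")


   class MixedEnv(Environment):
       def init(self):
           self.create_network(generator=complete_graph, n=3)
           self.add_agents(Greeter, k=3)  # soil convenience method, mesa agent class


   sim = Simulation(model=MixedEnv, max_steps=2)

   if __name__ == "__main__":
       sim.run(dump=False)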
|
|
||||||
|
This is a non-exhaustive list of tasks to achieve compatibility:
|
||||||
|
|
||||||
|
- [ ] Integrate `soil.Simulation` with mesa's runners:
|
||||||
|
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
|
||||||
|
- [ ] Integrate `soil.Environment` with `mesa.Model`:
|
||||||
|
- [x] `Soil.Environment` inherits from `mesa.Model`
|
||||||
|
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
|
||||||
|
- [ ] Allow for `mesa.Model` to be used in a simulation.
|
||||||
|
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
|
||||||
|
- [x] Rename agent.id to unique_id?
|
||||||
|
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
|
||||||
|
- [ ] Provide examples
|
||||||
|
- [ ] Using mesa modules in a soil simulation
|
||||||
|
- [ ] Using soil modules in a mesa simulation
|
||||||
|
- [ ] Document the new APIs and usage
|
35
docs/notes_v1.0.rst
Normal file
@ -0,0 +1,35 @@
|
|||||||
|
What are the main changes in version 1.0?
|
||||||
|
#########################################
|
||||||
|
|
||||||
|
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
|
||||||
|
Unfortunately, this comes at the cost of backwards compatibility.
|
||||||
|
|
||||||
|
We drew several lessons from the previous version of Soil, and tried to address them in this version.
|
||||||
|
Mainly:
|
||||||
|
|
||||||
|
- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
|
||||||
|
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases; typically only a handful of variables need to be stored. This fits nicely with Mesa's data collection system (see the sketch after this list).
|
||||||
|
- The API was too complex, and it was difficult to understand how to use it.
|
||||||
|
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
|
||||||
|
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.
|
||||||
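As a sketch of the data-collection point above (the reporter helpers mirror the ones used in `examples/events_and_messages/cars_sim.py`; the variable names are illustrative):

.. code:: python

   from soil import Environment, report


   class MyEnv(Environment):
       def init(self):
           self.total_incidents = 0
           # Collect only this variable, instead of the full state history
           self.add_model_reporter("total_incidents")

       @report
       @property
       def n_agents(self):
           return self.count_agents()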
|
|
||||||
|
The 0.30 rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
|
||||||
|
The new Soil is less configuration-centric.
|
||||||
|
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.
|
||||||
|
|
||||||
|
How are agents assigned to nodes in the network
|
||||||
|
###############################################
|
||||||
|
|
||||||
|
The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
|
||||||
|
If `topology` is not provided, it will default to `self.model.topology`.
|
||||||
|
This assignment will fail if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
|
||||||
|
If `node_id` is not provided, random nodes are sampled from the topology until one without an agent is found.
|
||||||
|
Then, the `node_id` of that node is assigned to the agent.
|
||||||
|
If no node with no agent is found, a new node is automatically added to the topology.
|
||||||
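The behaviour described above can be exercised with a small sketch (names follow the bundled examples; treat it as illustrative rather than canonical):

.. code:: python

   from networkx import complete_graph

   from soil import Environment
   from soil.agents import NetworkAgent


   class AssignmentExample(Environment):
       def init(self):
           self.create_network(generator=complete_graph, n=5)
           # Explicit placement: this agent is attached to node 0
           self.add_agent(agent_class=NetworkAgent, node_id=0)
           # Implicit placement: a free node is picked at random
           # (a new node would be added if none were free)
           self.add_agent(agent_class=NetworkAgent)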
|
|
||||||
|
|
||||||
|
Can Soil environments include more than one network / topology?
|
||||||
|
###############################################################
|
||||||
|
|
||||||
|
Yes, but each network has to be included manually.
|
||||||
|
Somewhere between 0.20 and 0.30 we added the ability to include multiple networks, but it was deemed too complex and was removed.
|
Binary image changes: docs/output_30_0.png (4.7 KiB), docs/output_34_0.png (11 KiB), docs/output_49_0.png (26 KiB) and docs/output_50_0.png (31 KiB) added; four previous images (8.3 KiB, 29 KiB, 33 KiB and 23 KiB) removed.
@ -1,93 +0,0 @@
|
|||||||
Quickstart
|
|
||||||
----------
|
|
||||||
|
|
||||||
This section shows how to run your first simulation with Soil.
|
|
||||||
For installation instructions, see :doc:`installation`.
|
|
||||||
|
|
||||||
There are mainly two parts in a simulation: agent classes and simulation configuration.
|
|
||||||
An agent class defines how the agent will behave throughout the simulation.
|
|
||||||
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: soil.png
|
|
||||||
:width: 80%
|
|
||||||
:align: center
|
|
||||||
|
|
||||||
|
|
||||||
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
|
|
||||||
If you are interested in developing your own agents classes, see :doc:`soil_tutorial`.
|
|
||||||
|
|
||||||
Configuration
|
|
||||||
=============
|
|
||||||
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
|
|
||||||
|
|
||||||
.. literalinclude:: quickstart.yml
|
|
||||||
:language: yaml
|
|
||||||
|
|
||||||
The agent type used, SISa, is a very simple model.
|
|
||||||
It only has three states (neutral, content and discontent).
|
|
||||||
Its parameters are the probabilities to change from one state to another, either spontaneously or because of contagion from neighboring agents.
|
|
||||||
|
|
||||||
Running the simulation
|
|
||||||
======================
|
|
||||||
|
|
||||||
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
❯ soil --graph --csv quickstart.yml [13:35:29]
|
|
||||||
INFO:soil:Using config(s): quickstart
|
|
||||||
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
|
|
||||||
INFO:soil:Starting simulation quickstart at 13:35:30.
|
|
||||||
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
|
|
||||||
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
|
|
||||||
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
|
|
||||||
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
|
|
||||||
INFO:soil:Dumping results to soil_output/quickstart
|
|
||||||
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
|
|
||||||
|
|
||||||
|
|
||||||
The ``CSV`` file should look like this:
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
agent_id,t_step,key,value
|
|
||||||
env,0,neutral_discontent_spon_prob,0.05
|
|
||||||
env,0,neutral_discontent_infected_prob,0.1
|
|
||||||
env,0,neutral_content_spon_prob,0.2
|
|
||||||
env,0,neutral_content_infected_prob,0.4
|
|
||||||
env,0,discontent_neutral,0.2
|
|
||||||
env,0,discontent_content,0.05
|
|
||||||
env,0,content_discontent,0.05
|
|
||||||
env,0,variance_d_c,0.05
|
|
||||||
env,0,variance_c_d,0.1
|
|
||||||
|
|
||||||
Results and visualization
|
|
||||||
=========================
|
|
||||||
|
|
||||||
The environment variables are stored with the special ``agent_id`` value ``env``.
|
|
||||||
The exported values are only stored when they change.
|
|
||||||
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
|
|
||||||
|
|
||||||
The dynamic graph is exported as a .gexf file which could be visualized with
|
|
||||||
`Gephi <https://gephi.org/users/download/>`__.
|
|
||||||
Now it is your turn to experiment with the simulation.
|
|
||||||
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
|
|
||||||
|
|
||||||
|
|
||||||
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
|
|
||||||
To make it work, you have to install soil like this:
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
pip install soil[web]
|
|
||||||
|
|
||||||
Once installed, the soil web UI can be run in two ways:
|
|
||||||
|
|
||||||
.. code::
|
|
||||||
|
|
||||||
soil-web
|
|
||||||
|
|
||||||
# OR
|
|
||||||
|
|
||||||
python -m soil.web
|
|
@ -1,30 +0,0 @@
|
|||||||
---
|
|
||||||
name: quickstart
|
|
||||||
num_trials: 1
|
|
||||||
max_time: 1000
|
|
||||||
network_agents:
|
|
||||||
- agent_type: SISaModel
|
|
||||||
state:
|
|
||||||
id: neutral
|
|
||||||
weight: 1
|
|
||||||
- agent_type: SISaModel
|
|
||||||
state:
|
|
||||||
id: content
|
|
||||||
weight: 2
|
|
||||||
network_params:
|
|
||||||
n: 100
|
|
||||||
k: 5
|
|
||||||
p: 0.2
|
|
||||||
generator: newman_watts_strogatz_graph
|
|
||||||
environment_params:
|
|
||||||
neutral_discontent_spon_prob: 0.05
|
|
||||||
neutral_discontent_infected_prob: 0.1
|
|
||||||
neutral_content_spon_prob: 0.2
|
|
||||||
neutral_content_infected_prob: 0.4
|
|
||||||
discontent_neutral: 0.2
|
|
||||||
discontent_content: 0.05
|
|
||||||
content_discontent: 0.05
|
|
||||||
variance_d_c: 0.05
|
|
||||||
variance_c_d: 0.1
|
|
||||||
content_neutral: 0.1
|
|
||||||
standard_variance: 0.1
|
|
@ -1 +1 @@
|
|||||||
ipython==7.31.1
|
ipython>=7.31.1
|
||||||
|
12
docs/soil-vs.rst
Normal file
@ -0,0 +1,12 @@
|
|||||||
|
### MESA
|
||||||
|
|
||||||
|
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
|
||||||
|
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
|
||||||
|
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
|
||||||
|
|
||||||
|
Here are some reasons to use Soil instead of plain mesa:
|
||||||
|
|
||||||
|
- Less boilerplate for common scenarios (by some definitions of common)
|
||||||
|
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
|
||||||
|
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed); see the sketch after this list
|
||||||
|
- Reporting functions that aggregate multiple
|
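A hedged sketch of the `soil.Simulation` point above, reusing the model from `examples/mesa` (run from that folder; the `iterations` name follows the renaming listed in the changelog):

```python
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator  # model from examples/mesa

# Three runs of the same experiment; only the random seed changes between them
sim = Simulation(
    model=MoneyEnv,
    iterations=3,
    max_steps=10,
    parameters=dict(generator=graph_generator, N=10, width=50, height=50),
)
sim.run()
```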
@ -1,27 +0,0 @@
|
|||||||
---
|
|
||||||
name: simple
|
|
||||||
group: tests
|
|
||||||
dir_path: "/tmp/"
|
|
||||||
num_trials: 3
|
|
||||||
max_time: 100
|
|
||||||
interval: 1
|
|
||||||
seed: "CompleteSeed!"
|
|
||||||
network_params:
|
|
||||||
generator: complete_graph
|
|
||||||
n: 10
|
|
||||||
network_agents:
|
|
||||||
- agent_type: CounterModel
|
|
||||||
weight: 1
|
|
||||||
state:
|
|
||||||
state_id: 0
|
|
||||||
- agent_type: AggregatedCounter
|
|
||||||
weight: 0.2
|
|
||||||
environment_agents: []
|
|
||||||
environment_class: Environment
|
|
||||||
environment_params:
|
|
||||||
am_i_complete: true
|
|
||||||
default_state:
|
|
||||||
incidents: 0
|
|
||||||
states:
|
|
||||||
- name: 'The first node'
|
|
||||||
- name: 'The second node'
|
|
@ -1,16 +0,0 @@
|
|||||||
---
|
|
||||||
name: custom-generator
|
|
||||||
description: Using a custom generator for the network
|
|
||||||
num_trials: 3
|
|
||||||
max_time: 100
|
|
||||||
interval: 1
|
|
||||||
network_params:
|
|
||||||
generator: mymodule.mygenerator
|
|
||||||
# These are custom parameters
|
|
||||||
n: 10
|
|
||||||
n_edges: 5
|
|
||||||
network_agents:
|
|
||||||
- agent_type: CounterModel
|
|
||||||
weight: 1
|
|
||||||
state:
|
|
||||||
state_id: 0
|
|
39
examples/custom_generator/generator_sim.py
Normal file
@ -0,0 +1,39 @@
|
|||||||
|
from networkx import Graph
|
||||||
|
import random
|
||||||
|
import networkx as nx
|
||||||
|
from soil import Simulation, Environment, CounterModel, parameters
|
||||||
|
|
||||||
|
|
||||||
|
def mygenerator(n=5, n_edges=5):
|
||||||
|
"""
|
||||||
|
Just a simple generator that creates a network with n nodes and
|
||||||
|
n_edges edges. Edges are assigned randomly, only avoiding self loops.
|
||||||
|
"""
|
||||||
|
G = nx.Graph()
|
||||||
|
|
||||||
|
for i in range(n):
|
||||||
|
G.add_node(i)
|
||||||
|
|
||||||
|
for i in range(n_edges):
|
||||||
|
nodes = list(G.nodes)
|
||||||
|
n_in = random.choice(nodes)
|
||||||
|
nodes.remove(n_in) # Avoid loops
|
||||||
|
n_out = random.choice(nodes)
|
||||||
|
G.add_edge(n_in, n_out)
|
||||||
|
return G
|
||||||
|
|
||||||
|
|
||||||
|
class GeneratorEnv(Environment):
|
||||||
|
"""Using a custom generator for the network"""
|
||||||
|
|
||||||
|
generator: parameters.function = staticmethod(mygenerator)
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.create_network(generator=self.generator, n=10, n_edges=5)
|
||||||
|
self.add_agents(CounterModel)
|
||||||
|
|
||||||
|
|
||||||
|
sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
sim.run(dump=False)
|
@ -1,27 +0,0 @@
|
|||||||
from networkx import Graph
|
|
||||||
import networkx as nx
|
|
||||||
from random import choice
|
|
||||||
|
|
||||||
def mygenerator(n=5, n_edges=5):
|
|
||||||
'''
|
|
||||||
Just a simple generator that creates a network with n nodes and
|
|
||||||
n_edges edges. Edges are assigned randomly, only avoiding self loops.
|
|
||||||
'''
|
|
||||||
G = nx.Graph()
|
|
||||||
|
|
||||||
for i in range(n):
|
|
||||||
G.add_node(i)
|
|
||||||
|
|
||||||
for i in range(n_edges):
|
|
||||||
nodes = list(G.nodes)
|
|
||||||
n_in = choice(nodes)
|
|
||||||
nodes.remove(n_in) # Avoid loops
|
|
||||||
n_out = choice(nodes)
|
|
||||||
G.add_edge(n_in, n_out)
|
|
||||||
return G
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -1,35 +0,0 @@
|
|||||||
from soil.agents import FSM, state, default_state
|
|
||||||
|
|
||||||
|
|
||||||
class Fibonacci(FSM):
|
|
||||||
'''Agent that only executes in t_steps that are Fibonacci numbers'''
|
|
||||||
|
|
||||||
defaults = {
|
|
||||||
'prev': 1
|
|
||||||
}
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def counting(self):
|
|
||||||
self.log('Stopping at {}'.format(self.now))
|
|
||||||
prev, self['prev'] = self['prev'], max([self.now, self['prev']])
|
|
||||||
return None, self.env.timeout(prev)
|
|
||||||
|
|
||||||
class Odds(FSM):
|
|
||||||
'''Agent that only executes in odd t_steps'''
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def odds(self):
|
|
||||||
self.log('Stopping at {}'.format(self.now))
|
|
||||||
return None, self.env.timeout(1+self.now%2)
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
import logging
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
from soil import Simulation
|
|
||||||
s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci},
|
|
||||||
{'ids': [1], 'agent_type': Odds}],
|
|
||||||
network_params={"generator": "complete_graph", "n": 2},
|
|
||||||
max_time=100,
|
|
||||||
)
|
|
||||||
s.run(dry_run=True)
|
|
41
examples/custom_timeouts/custom_timeouts_sim.py
Normal file
@ -0,0 +1,41 @@
|
|||||||
|
from soil.agents import FSM, state, default_state
|
||||||
|
from soil.time import Delta
|
||||||
|
|
||||||
|
|
||||||
|
class Fibonacci(FSM):
|
||||||
|
"""Agent that only executes in t_steps that are Fibonacci numbers"""
|
||||||
|
prev = 1
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def counting(self):
|
||||||
|
self.log("Stopping at {}".format(self.now))
|
||||||
|
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
|
||||||
|
return None, Delta(prev)
|
||||||
|
|
||||||
|
|
||||||
|
class Odds(FSM):
|
||||||
|
"""Agent that only executes in odd t_steps"""
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def odds(self):
|
||||||
|
self.log("Stopping at {}".format(self.now))
|
||||||
|
return None, Delta(1 + self.now % 2)
|
||||||
|
|
||||||
|
|
||||||
|
from soil import Environment, Simulation
|
||||||
|
from networkx import complete_graph
|
||||||
|
|
||||||
|
|
||||||
|
class TimeoutsEnv(Environment):
|
||||||
|
def init(self):
|
||||||
|
self.create_network(generator=complete_graph, n=2)
|
||||||
|
self.add_agent(agent_class=Fibonacci, node_id=0)
|
||||||
|
self.add_agent(agent_class=Odds, node_id=1)
|
||||||
|
|
||||||
|
|
||||||
|
sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sim.run(dump=False)
|
9
examples/events_and_messages/README.md
Normal file
@ -0,0 +1,9 @@
|
|||||||
|
This example can be run with command-line options, like this:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python cars.py --level DEBUG -e summary --csv
|
||||||
|
#or
|
||||||
|
soil cars.py -e summary
|
||||||
|
```
|
||||||
|
|
||||||
|
This will set the `CSV` (save the agent and model data to a CSV file) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.
|
231
examples/events_and_messages/cars_sim.py
Normal file
@ -0,0 +1,231 @@
|
|||||||
|
"""
|
||||||
|
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
|
||||||
|
from their location to their desired location.
|
||||||
|
|
||||||
|
An example scenario could play like the following:
|
||||||
|
|
||||||
|
- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
|
||||||
|
- Passenger(1) tells every driver that it wants to request a Journey.
|
||||||
|
- Each driver receives the request.
|
||||||
|
If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
|
||||||
|
- When Passenger(1) accepts the request, two things happen:
|
||||||
|
- Passenger(1) changes its state to `driving_home`
|
||||||
|
- Driver(2) starts moving towards the origin of the Journey
|
||||||
|
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
|
||||||
|
- When Driver(2) reaches the destination (carrying Passenger(1) along):
|
||||||
|
- Driver(2) starts wandering again
|
||||||
|
- Passenger(1) dies, and is removed from the simulation
|
||||||
|
- If there are no more passengers available in the simulation, Drivers die
|
||||||
|
"""
|
||||||
|
from __future__ import annotations
|
||||||
|
from typing import Optional
|
||||||
|
from soil import *
|
||||||
|
from soil import events
|
||||||
|
from mesa.space import MultiGrid
|
||||||
|
|
||||||
|
|
||||||
|
# More complex scenarios may use more than one type of message between objects.
|
||||||
|
# A common pattern is to use `enum.Enum` to represent state changes in a request.
|
||||||
|
@dataclass
|
||||||
|
class Journey:
|
||||||
|
"""
|
||||||
|
This represents a request for a journey. Passengers and drivers exchange this object.
|
||||||
|
|
||||||
|
A journey may have a driver assigned or not. If the driver has not been assigned, this
|
||||||
|
object is considered a "request for a journey".
|
||||||
|
"""
|
||||||
|
|
||||||
|
origin: (int, int)
|
||||||
|
destination: (int, int)
|
||||||
|
tip: float
|
||||||
|
|
||||||
|
passenger: Passenger
|
||||||
|
driver: Optional[Driver] = None
|
||||||
|
|
||||||
|
|
||||||
|
class City(EventedEnvironment):
|
||||||
|
"""
|
||||||
|
An environment with a grid where drivers and passengers will be placed.
|
||||||
|
|
||||||
|
The number of drivers and riders is configurable through its parameters:
|
||||||
|
|
||||||
|
:param str n_cars: The total number of drivers to add
|
||||||
|
:param str n_passengers: The number of passengers in the simulation
|
||||||
|
:param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
|
||||||
|
and `n_cars` params.
|
||||||
|
:param int height: Height of the internal grid
|
||||||
|
:param int width: Width of the internal grid
|
||||||
|
"""
|
||||||
|
n_cars = 1
|
||||||
|
n_passengers = 10
|
||||||
|
height = 100
|
||||||
|
width = 100
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
|
||||||
|
if not self.agents:
|
||||||
|
self.add_agents(Driver, k=self.n_cars)
|
||||||
|
self.add_agents(Passenger, k=self.n_passengers)
|
||||||
|
|
||||||
|
for agent in self.agents:
|
||||||
|
self.grid.place_agent(agent, (0, 0))
|
||||||
|
self.grid.move_to_empty(agent)
|
||||||
|
|
||||||
|
self.total_earnings = 0
|
||||||
|
self.add_model_reporter("total_earnings")
|
||||||
|
|
||||||
|
@report
|
||||||
|
@property
|
||||||
|
def number_passengers(self):
|
||||||
|
return self.count_agents(agent_class=Passenger)
|
||||||
|
|
||||||
|
|
||||||
|
class Driver(Evented, FSM):
|
||||||
|
pos = None
|
||||||
|
journey = None
|
||||||
|
earnings = 0
|
||||||
|
|
||||||
|
def on_receive(self, msg, sender):
|
||||||
|
"""This is not a state. It will run (and block) every time check_messages is invoked"""
|
||||||
|
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
|
||||||
|
msg.driver = self
|
||||||
|
self.journey = msg
|
||||||
|
|
||||||
|
def check_passengers(self):
|
||||||
|
"""If there are no more passengers, stop forever"""
|
||||||
|
c = self.count_agents(agent_class=Passenger)
|
||||||
|
self.debug(f"Passengers left {c}")
|
||||||
|
if not c:
|
||||||
|
self.die("No more passengers")
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def wandering(self):
|
||||||
|
"""Move around the city until a journey is accepted"""
|
||||||
|
target = None
|
||||||
|
self.check_passengers()
|
||||||
|
self.journey = None
|
||||||
|
while self.journey is None: # No potential journeys detected (see on_receive)
|
||||||
|
if target is None or not self.move_towards(target):
|
||||||
|
target = self.random.choice(
|
||||||
|
self.model.grid.get_neighborhood(self.pos, moore=False)
|
||||||
|
)
|
||||||
|
|
||||||
|
self.check_passengers()
|
||||||
|
# This will call on_receive behind the scenes, and the agent's status will be updated
|
||||||
|
self.check_messages()
|
||||||
|
yield Delta(30) # Wait at least 30 seconds before checking again
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Re-send the journey to the passenger, to confirm that we have been selected
|
||||||
|
self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
|
||||||
|
except events.TimedOut:
|
||||||
|
# No journey has been accepted. Try again
|
||||||
|
self.journey = None
|
||||||
|
return
|
||||||
|
|
||||||
|
return self.driving
|
||||||
|
|
||||||
|
@state
|
||||||
|
def driving(self):
|
||||||
|
"""The journey has been accepted. Pick them up and take them to their destination"""
|
||||||
|
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
|
||||||
|
while self.move_towards(self.journey.origin):
|
||||||
|
yield
|
||||||
|
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
|
||||||
|
while self.move_towards(self.journey.destination, with_passenger=True):
|
||||||
|
yield
|
||||||
|
self.info("Arrived at destination")
|
||||||
|
self.earnings += self.journey.tip
|
||||||
|
self.model.total_earnings += self.journey.tip
|
||||||
|
self.check_passengers()
|
||||||
|
return self.wandering
|
||||||
|
|
||||||
|
def move_towards(self, target, with_passenger=False):
|
||||||
|
"""Move one cell at a time towards a target"""
|
||||||
|
self.debug(f"Moving { self.pos } -> { target }")
|
||||||
|
if target[0] == self.pos[0] and target[1] == self.pos[1]:
|
||||||
|
return False
|
||||||
|
|
||||||
|
next_pos = [self.pos[0], self.pos[1]]
|
||||||
|
for idx in [0, 1]:
|
||||||
|
if self.pos[idx] < target[idx]:
|
||||||
|
next_pos[idx] += 1
|
||||||
|
break
|
||||||
|
if self.pos[idx] > target[idx]:
|
||||||
|
next_pos[idx] -= 1
|
||||||
|
break
|
||||||
|
self.model.grid.move_agent(self, tuple(next_pos))
|
||||||
|
if with_passenger:
|
||||||
|
self.journey.passenger.pos = (
|
||||||
|
self.pos
|
||||||
|
) # This could be communicated through messages
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
class Passenger(Evented, FSM):
|
||||||
|
pos = None
|
||||||
|
|
||||||
|
def on_receive(self, msg, sender):
|
||||||
|
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
|
||||||
|
|
||||||
|
if isinstance(msg, Journey):
|
||||||
|
self.journey = msg
|
||||||
|
return msg
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def asking(self):
|
||||||
|
destination = (
|
||||||
|
self.random.randint(0, self.model.grid.height-1),
|
||||||
|
self.random.randint(0, self.model.grid.width-1),
|
||||||
|
)
|
||||||
|
self.journey = None
|
||||||
|
journey = Journey(
|
||||||
|
origin=self.pos,
|
||||||
|
destination=destination,
|
||||||
|
tip=self.random.randint(10, 100),
|
||||||
|
passenger=self,
|
||||||
|
)
|
||||||
|
|
||||||
|
timeout = 60
|
||||||
|
expiration = self.now + timeout
|
||||||
|
self.info(f"Asking for journey at: { self.pos }")
|
||||||
|
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
|
||||||
|
while not self.journey:
|
||||||
|
self.debug(f"Waiting for responses at: { self.pos }")
|
||||||
|
try:
|
||||||
|
# This will call check_messages behind the scenes, and the agent's status will be updated
|
||||||
|
# If you want to avoid that, you can call it with: check=False
|
||||||
|
yield self.received(expiration=expiration)
|
||||||
|
except events.TimedOut:
|
||||||
|
self.info(f"Still no response. Waiting at: { self.pos }")
|
||||||
|
self.model.broadcast(
|
||||||
|
journey, ttl=timeout, sender=self, agent_class=Driver
|
||||||
|
)
|
||||||
|
expiration = self.now + timeout
|
||||||
|
self.info(f"Got a response! Waiting for driver")
|
||||||
|
return self.driving_home
|
||||||
|
|
||||||
|
@state
|
||||||
|
def driving_home(self):
|
||||||
|
while (
|
||||||
|
self.pos[0] != self.journey.destination[0]
|
||||||
|
or self.pos[1] != self.journey.destination[1]
|
||||||
|
):
|
||||||
|
try:
|
||||||
|
yield self.received(timeout=60)
|
||||||
|
except events.TimedOut:
|
||||||
|
pass
|
||||||
|
|
||||||
|
self.die("Got home safe!")
|
||||||
|
|
||||||
|
|
||||||
|
simulation = Simulation(name="RideHailing",
|
||||||
|
model=City,
|
||||||
|
seed="carsSeed",
|
||||||
|
max_time=1000,
|
||||||
|
parameters=dict(n_passengers=2))
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
easy(simulation)
|
@ -1,21 +0,0 @@
|
|||||||
---
|
|
||||||
name: mesa_sim
|
|
||||||
group: tests
|
|
||||||
dir_path: "/tmp"
|
|
||||||
num_trials: 3
|
|
||||||
max_time: 100
|
|
||||||
interval: 1
|
|
||||||
seed: '1'
|
|
||||||
network_params:
|
|
||||||
generator: social_wealth.graph_generator
|
|
||||||
n: 5
|
|
||||||
network_agents:
|
|
||||||
- agent_type: social_wealth.SocialMoneyAgent
|
|
||||||
weight: 1
|
|
||||||
environment_class: social_wealth.MoneyEnv
|
|
||||||
environment_params:
|
|
||||||
num_mesa_agents: 5
|
|
||||||
mesa_agent_type: social_wealth.MoneyAgent
|
|
||||||
N: 10
|
|
||||||
width: 50
|
|
||||||
height: 50
|
|
7
examples/mesa/mesa_sim.py
Normal file
@ -0,0 +1,7 @@
|
|||||||
|
from soil import Simulation
|
||||||
|
from social_wealth import MoneyEnv, graph_generator
|
||||||
|
|
||||||
|
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sim.run()
|
@ -1,7 +1,8 @@
 from mesa.visualization.ModularVisualization import ModularServer
-from soil.visualization import UserSettableParameter
+from mesa.visualization.UserParam import Slider, Choice
 from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
 from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
+import networkx as nx


 class MyNetwork(NetworkModule):
@ -13,15 +14,18 @@ def network_portrayal(env):
     # The model ensures there is 0 or 1 agent per node

     portrayal = dict()
+    wealths = {
+        node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
+    }
     portrayal["nodes"] = [
         {
-            "id": agent_id,
-            "size": env.get_agent(agent_id).wealth,
-            # "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
-            "color": "#CC0000",
-            "label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
+            "id": node_id,
+            "size": 2 * (wealth + 1),
+            "color": "#CC0000" if wealth == 0 else "#007959",
+            # "color": "#CC0000",
+            "label": f"{node_id}: {wealth}",
         }
-        for (agent_id) in env.G.nodes
+        for (node_id, wealth) in wealths.items()
     ]

     portrayal["edges"] = [
@ -29,7 +33,6 @@ def network_portrayal(env):
         for edge_id, (source, target) in enumerate(env.G.edges)
     ]

-
     return portrayal

@ -51,18 +54,17 @@ def gridPortrayal(agent):
         "Text": agent.unique_id,
         "x": agent.pos[0],
         "y": agent.pos[1],
-        "Color": f"rgba(31, 10, 255, 0.{color})"
+        "Color": f"rgba(31, 10, 255, 0.{color})",
     }


-grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
+grid = MyNetwork(network_portrayal, 500, 500)
 chart = ChartModule(
     [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
 )

-model_params = {
-    "N": UserSettableParameter(
-        "slider",
+parameters = {
+    "N": Slider(
         "N",
         5,
         1,
@ -70,9 +72,7 @@ model_params = {
         1,
         description="Choose how many agents to include in the model",
     ),
-    "network_agents": [{"agent_type": SocialMoneyAgent}],
-    "height": UserSettableParameter(
-        "slider",
+    "height": Slider(
         "height",
         5,
         5,
@ -80,8 +80,7 @@ model_params = {
         1,
         description="Grid height",
     ),
-    "width": UserSettableParameter(
-        "slider",
+    "width": Slider(
         "width",
         5,
         5,
@ -89,17 +88,24 @@ model_params = {
         1,
         description="Grid width",
     ),
-    "network_params": {
-        'generator': graph_generator
-    },
+    "agent_class": Choice(
+        "Agent class",
+        value="MoneyAgent",
+        choices=["MoneyAgent", "SocialMoneyAgent"],
+    ),
+    "generator": graph_generator,
 }

-canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
+canvas_element = CanvasGrid(
+    gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
+)


 server = ModularServer(
-    MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
+    MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
 )
 server.port = 8521

-server.launch(open_browser=False)
+if __name__ == '__main__':
+    server.launch(open_browser=False)
|
@ -1,23 +1,26 @@
-'''
+"""
 This is an example that adds soil agents and environment in a normal
 mesa workflow.
-'''
+"""
 from mesa import Agent as MesaAgent
 from mesa.space import MultiGrid

 # from mesa.time import RandomActivation
 from mesa.datacollection import DataCollector
 from mesa.batchrunner import BatchRunner

 import networkx as nx

-from soil import NetworkAgent, Environment
+from soil import NetworkAgent, Environment, serialization


 def compute_gini(model):
     agent_wealths = [agent.wealth for agent in model.agents]
     x = sorted(agent_wealths)
     N = len(list(model.agents))
     B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
-    return (1 + (1/N) - 2*B)
+    return 1 + (1 / N) - 2 * B


 class MoneyAgent(MesaAgent):
     """
@ -25,15 +28,14 @@ class MoneyAgent(MesaAgent):
     It will only share wealth with neighbors based on grid proximity
     """

-    def __init__(self, unique_id, model):
+    def __init__(self, unique_id, model, wealth=1, **kwargs):
         super().__init__(unique_id=unique_id, model=model)
-        self.wealth = 1
+        self.wealth = wealth

     def move(self):
         possible_steps = self.model.grid.get_neighborhood(
-            self.pos,
-            moore=True,
-            include_center=False)
+            self.pos, moore=True, include_center=False
+        )
         new_position = self.random.choice(possible_steps)
         self.model.grid.move_agent(self, new_position)

@ -45,21 +47,21 @@ class MoneyAgent(MesaAgent):
         self.wealth -= 1

     def step(self):
-        self.info("Crying wolf", self.pos)
+        print("Crying wolf", self.pos)
         self.move()
         if self.wealth > 0:
             self.give_money()


-class SocialMoneyAgent(NetworkAgent, MoneyAgent):
+class SocialMoneyAgent(MoneyAgent, NetworkAgent):
     wealth = 1

     def give_money(self):
         cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
-        friends = set(self.get_neighboring_agents())
+        friends = set(self.get_neighbors())
         self.info("Trying to give money")
-        self.debug("Cellmates: ", cellmates)
-        self.debug("Friends: ", friends)
+        self.info("Cellmates: ", cellmates)
+        self.info("Friends: ", friends)

         nearby_friends = list(cellmates & friends)

@ -69,14 +71,35 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
         self.wealth -= 1


+def graph_generator(n=5):
+    G = nx.Graph()
+    for ix in range(n):
+        G.add_edge(0, ix)
+    return G
+
+
 class MoneyEnv(Environment):
     """A model with some number of agents."""
-    def __init__(self, N, width, height, *args, network_params, **kwargs):
-        network_params['n'] = N
-        super().__init__(*args, network_params=network_params, **kwargs)
+
+    def __init__(
+        self,
+        width,
+        height,
+        N,
+        generator=graph_generator,
+        agent_class=SocialMoneyAgent,
+        topology=None,
+        **kwargs
+    ):
+
+        generator = serialization.deserialize(generator)
+        agent_class = serialization.deserialize(agent_class, globs=globals())
+        topology = generator(n=N)
+        super().__init__(topology=topology, N=N, **kwargs)
         self.grid = MultiGrid(width, height, False)

+        self.populate_network(agent_class=agent_class)
+
         # Create agents
         for agent in self.agents:
             x = self.random.randrange(self.grid.width)
@ -84,37 +107,31 @@ class MoneyEnv(Environment):
             self.grid.place_agent(agent, (x, y))

         self.datacollector = DataCollector(
-            model_reporters={"Gini": compute_gini},
-            agent_reporters={"Wealth": "wealth"})
+            model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
+        )


-def graph_generator(n=5):
-    G = nx.Graph()
-    for ix in range(n):
-        G.add_edge(0, ix)
-    return G
-
-
-if __name__ == '__main__':
-
-    G = graph_generator()
-    fixed_params = {"topology": G,
-                    "width": 10,
-                    "network_agents": [{"agent_type": SocialMoneyAgent,
-                                        'weight': 1}],
-                    "height": 10}
+if __name__ == "__main__":
+
+    fixed_params = {
+        "generator": nx.complete_graph,
+        "width": 10,
+        "network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
+        "height": 10,
+    }

     variable_params = {"N": range(10, 100, 10)}

-    batch_run = BatchRunner(MoneyEnv,
-                            variable_parameters=variable_params,
-                            fixed_parameters=fixed_params,
-                            iterations=5,
-                            max_steps=100,
-                            model_reporters={"Gini": compute_gini})
+    batch_run = BatchRunner(
+        MoneyEnv,
+        variable_parameters=variable_params,
+        fixed_parameters=fixed_params,
+        iterations=5,
+        max_steps=100,
+        model_reporters={"Gini": compute_gini},
+    )
     batch_run.run_all()

     run_data = batch_run.get_model_vars_dataframe()
     run_data.head()
     print(run_data.Gini)
|
@ -4,24 +4,26 @@ from mesa.time import RandomActivation
 from mesa.datacollection import DataCollector
 from mesa.batchrunner import BatchRunner


 def compute_gini(model):
     agent_wealths = [agent.wealth for agent in model.schedule.agents]
     x = sorted(agent_wealths)
     N = model.num_agents
     B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
-    return (1 + (1/N) - 2*B)
+    return 1 + (1 / N) - 2 * B


 class MoneyAgent(Agent):
     """An agent with fixed initial wealth."""

     def __init__(self, unique_id, model):
         super().__init__(unique_id, model)
         self.wealth = 1

     def move(self):
         possible_steps = self.model.grid.get_neighborhood(
-            self.pos,
-            moore=True,
-            include_center=False)
+            self.pos, moore=True, include_center=False
+        )
         new_position = self.random.choice(possible_steps)
         self.model.grid.move_agent(self, new_position)

@ -37,8 +39,10 @@ class MoneyAgent(Agent):
         if self.wealth > 0:
             self.give_money()


 class MoneyModel(Model):
     """A model with some number of agents."""

     def __init__(self, N, width, height):
         self.num_agents = N
         self.grid = MultiGrid(width, height, True)
@ -55,29 +59,29 @@ class MoneyModel(Model):
         self.grid.place_agent(a, (x, y))

         self.datacollector = DataCollector(
-            model_reporters={"Gini": compute_gini},
-            agent_reporters={"Wealth": "wealth"})
+            model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
+        )

     def step(self):
         self.datacollector.collect(self)
         self.schedule.step()


-if __name__ == '__main__':
+if __name__ == "__main__":

-    fixed_params = {"width": 10,
-                    "height": 10}
+    fixed_params = {"width": 10, "height": 10}
     variable_params = {"N": range(10, 500, 10)}

-    batch_run = BatchRunner(MoneyModel,
-                            variable_params,
-                            fixed_params,
-                            iterations=5,
-                            max_steps=100,
-                            model_reporters={"Gini": compute_gini})
+    batch_run = BatchRunner(
+        MoneyModel,
+        variable_params,
+        fixed_params,
+        iterations=5,
+        max_steps=100,
+        model_reporters={"Gini": compute_gini},
+    )
     batch_run.run_all()

     run_data = batch_run.get_model_vars_dataframe()
     run_data.head()
     print(run_data.Gini)
|
@ -80,11 +80,11 @@
 "max_time: 300\r\n",
 "name: Sim_all_dumb\r\n",
 "network_agents:\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
 " state:\r\n",
 " has_tv: false\r\n",
 " weight: 1\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " weight: 1\r\n",
@ -104,19 +104,19 @@
 "max_time: 300\r\n",
 "name: Sim_half_herd\r\n",
 "network_agents:\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
 " state:\r\n",
 " has_tv: false\r\n",
 " weight: 1\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
 " state:\r\n",
 " has_tv: false\r\n",
 " weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " weight: 1\r\n",
@ -136,12 +136,12 @@
 "max_time: 300\r\n",
 "name: Sim_all_herd\r\n",
 "network_agents:\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " state_id: neutral\r\n",
 " weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " state_id: neutral\r\n",
@ -163,12 +163,12 @@
 "max_time: 300\r\n",
 "name: Sim_wise_herd\r\n",
 "network_agents:\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " state_id: neutral\r\n",
 " weight: 1\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " weight: 1\r\n",
@ -189,12 +189,12 @@
 "max_time: 300\r\n",
 "name: Sim_all_wise\r\n",
 "network_agents:\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " state_id: neutral\r\n",
 " weight: 1\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
 " state:\r\n",
 " has_tv: true\r\n",
 " weight: 1\r\n",
|
@ -1,138 +0,0 @@
|
|||||||
---
|
|
||||||
default_state: {}
|
|
||||||
load_module: newsspread
|
|
||||||
environment_agents: []
|
|
||||||
environment_params:
|
|
||||||
prob_neighbor_spread: 0.0
|
|
||||||
prob_tv_spread: 0.01
|
|
||||||
interval: 1
|
|
||||||
max_time: 300
|
|
||||||
name: Sim_all_dumb
|
|
||||||
network_agents:
|
|
||||||
- agent_type: DumbViewer
|
|
||||||
state:
|
|
||||||
has_tv: false
|
|
||||||
weight: 1
|
|
||||||
- agent_type: DumbViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
weight: 1
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
num_trials: 50
|
|
||||||
---
|
|
||||||
default_state: {}
|
|
||||||
load_module: newsspread
|
|
||||||
environment_agents: []
|
|
||||||
environment_params:
|
|
||||||
prob_neighbor_spread: 0.0
|
|
||||||
prob_tv_spread: 0.01
|
|
||||||
interval: 1
|
|
||||||
max_time: 300
|
|
||||||
name: Sim_half_herd
|
|
||||||
network_agents:
|
|
||||||
- agent_type: DumbViewer
|
|
||||||
state:
|
|
||||||
has_tv: false
|
|
||||||
weight: 1
|
|
||||||
- agent_type: DumbViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
weight: 1
|
|
||||||
- agent_type: HerdViewer
|
|
||||||
state:
|
|
||||||
has_tv: false
|
|
||||||
weight: 1
|
|
||||||
- agent_type: HerdViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
weight: 1
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
num_trials: 50
|
|
||||||
---
|
|
||||||
default_state: {}
|
|
||||||
load_module: newsspread
|
|
||||||
environment_agents: []
|
|
||||||
environment_params:
|
|
||||||
prob_neighbor_spread: 0.0
|
|
||||||
prob_tv_spread: 0.01
|
|
||||||
interval: 1
|
|
||||||
max_time: 300
|
|
||||||
name: Sim_all_herd
|
|
||||||
network_agents:
|
|
||||||
- agent_type: HerdViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
state_id: neutral
|
|
||||||
weight: 1
|
|
||||||
- agent_type: HerdViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
state_id: neutral
|
|
||||||
weight: 1
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
num_trials: 50
|
|
||||||
---
|
|
||||||
default_state: {}
|
|
||||||
load_module: newsspread
|
|
||||||
environment_agents: []
|
|
||||||
environment_params:
|
|
||||||
prob_neighbor_spread: 0.0
|
|
||||||
prob_tv_spread: 0.01
|
|
||||||
prob_neighbor_cure: 0.1
|
|
||||||
interval: 1
|
|
||||||
max_time: 300
|
|
||||||
name: Sim_wise_herd
|
|
||||||
network_agents:
|
|
||||||
- agent_type: HerdViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
state_id: neutral
|
|
||||||
weight: 1
|
|
||||||
- agent_type: WiseViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
weight: 1
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
num_trials: 50
|
|
||||||
---
|
|
||||||
default_state: {}
|
|
||||||
load_module: newsspread
|
|
||||||
environment_agents: []
|
|
||||||
environment_params:
|
|
||||||
prob_neighbor_spread: 0.0
|
|
||||||
prob_tv_spread: 0.01
|
|
||||||
prob_neighbor_cure: 0.1
|
|
||||||
interval: 1
|
|
||||||
max_time: 300
|
|
||||||
name: Sim_all_wise
|
|
||||||
network_agents:
|
|
||||||
- agent_type: WiseViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
state_id: neutral
|
|
||||||
weight: 1
|
|
||||||
- agent_type: WiseViewer
|
|
||||||
state:
|
|
||||||
has_tv: true
|
|
||||||
weight: 1
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
network_params:
|
|
||||||
generator: barabasi_albert_graph
|
|
||||||
n: 500
|
|
||||||
m: 5
|
|
||||||
num_trials: 50
|
|
@ -1,86 +0,0 @@
|
|||||||
from soil.agents import FSM, state, default_state, prob
|
|
||||||
import logging
|
|
||||||
|
|
||||||
|
|
||||||
class DumbViewer(FSM):
|
|
||||||
'''
|
|
||||||
A viewer that gets infected via TV (if it has one) and tries to infect
|
|
||||||
its neighbors once it's infected.
|
|
||||||
'''
|
|
||||||
defaults = {
|
|
||||||
'prob_neighbor_spread': 0.5,
|
|
||||||
'prob_tv_spread': 0.1,
|
|
||||||
}
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def neutral(self):
|
|
||||||
if self['has_tv']:
|
|
||||||
if prob(self.env['prob_tv_spread']):
|
|
||||||
return self.infected
|
|
||||||
|
|
||||||
@state
|
|
||||||
def infected(self):
|
|
||||||
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
|
|
||||||
if prob(self.env['prob_neighbor_spread']):
|
|
||||||
neighbor.infect()
|
|
||||||
|
|
||||||
def infect(self):
|
|
||||||
'''
|
|
||||||
This is not a state. It is a function that other agents can use to try to
|
|
||||||
infect this agent. DumbViewer always gets infected, but other agents like
|
|
||||||
HerdViewer might not become infected right away
|
|
||||||
'''
|
|
||||||
|
|
||||||
self.set_state(self.infected)
|
|
||||||
|
|
||||||
|
|
||||||
class HerdViewer(DumbViewer):
|
|
||||||
'''
|
|
||||||
A viewer whose probability of infection depends on the state of its neighbors.
|
|
||||||
'''
|
|
||||||
|
|
||||||
def infect(self):
|
|
||||||
'''Notice again that this is NOT a state. See DumbViewer.infect for reference'''
|
|
||||||
infected = self.count_neighboring_agents(state_id=self.infected.id)
|
|
||||||
total = self.count_neighboring_agents()
|
|
||||||
prob_infect = self.env['prob_neighbor_spread'] * infected/total
|
|
||||||
self.debug('prob_infect', prob_infect)
|
|
||||||
if prob(prob_infect):
|
|
||||||
self.set_state(self.infected)
|
|
||||||
|
|
||||||
|
|
||||||
class WiseViewer(HerdViewer):
|
|
||||||
'''
|
|
||||||
A viewer that can change its mind.
|
|
||||||
'''
|
|
||||||
|
|
||||||
defaults = {
|
|
||||||
'prob_neighbor_spread': 0.5,
|
|
||||||
'prob_neighbor_cure': 0.25,
|
|
||||||
'prob_tv_spread': 0.1,
|
|
||||||
}
|
|
||||||
|
|
||||||
@state
|
|
||||||
def cured(self):
|
|
||||||
prob_cure = self.env['prob_neighbor_cure']
|
|
||||||
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
|
|
||||||
if prob(prob_cure):
|
|
||||||
try:
|
|
||||||
neighbor.cure()
|
|
||||||
except AttributeError:
|
|
||||||
self.debug('Viewer {} cannot be cured'.format(neighbor.id))
|
|
||||||
|
|
||||||
def cure(self):
|
|
||||||
self.set_state(self.cured.id)
|
|
||||||
|
|
||||||
@state
|
|
||||||
def infected(self):
|
|
||||||
cured = max(self.count_neighboring_agents(self.cured.id),
|
|
||||||
1.0)
|
|
||||||
infected = max(self.count_neighboring_agents(self.infected.id),
|
|
||||||
1.0)
|
|
||||||
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
|
|
||||||
if prob(prob_cure):
|
|
||||||
return self.cured
|
|
||||||
return self.set_state(super().infected)
|
|
134
examples/newsspread/newsspread_sim.py
Normal file
@ -0,0 +1,134 @@
|
|||||||
|
from soil.agents import FSM, NetworkAgent, state, default_state, prob
|
||||||
|
from soil.parameters import *
|
||||||
|
import logging
|
||||||
|
|
||||||
|
from soil.environment import Environment
|
||||||
|
|
||||||
|
|
||||||
|
class DumbViewer(FSM, NetworkAgent):
|
||||||
|
"""
|
||||||
|
A viewer that gets infected via TV (if it has one) and tries to infect
|
||||||
|
its neighbors once it's infected.
|
||||||
|
"""
|
||||||
|
|
||||||
|
has_been_infected: bool = False
|
||||||
|
has_tv: bool = False
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def neutral(self):
|
||||||
|
if self.has_tv:
|
||||||
|
if self.prob(self.get("prob_tv_spread")):
|
||||||
|
return self.infected
|
||||||
|
if self.has_been_infected:
|
||||||
|
return self.infected
|
||||||
|
|
||||||
|
@state
|
||||||
|
def infected(self):
|
||||||
|
for neighbor in self.get_neighbors(state_id=self.neutral.id):
|
||||||
|
if self.prob(self.get("prob_neighbor_spread")):
|
||||||
|
neighbor.infect()
|
||||||
|
|
||||||
|
def infect(self):
|
||||||
|
"""
|
||||||
|
This is not a state. It is a function that other agents can use to try to
|
||||||
|
infect this agent. DumbViewer always gets infected, but other agents like
|
||||||
|
HerdViewer might not become infected right away
|
||||||
|
"""
|
||||||
|
self.has_been_infected = True
|
||||||
|
|
||||||
|
|
||||||
|
class HerdViewer(DumbViewer):
|
||||||
|
"""
|
||||||
|
A viewer whose probability of infection depends on the state of its neighbors.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def infect(self):
|
||||||
|
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
|
||||||
|
infected = self.count_neighbors(state_id=self.infected.id)
|
||||||
|
total = self.count_neighbors()
|
||||||
|
prob_infect = self.get("prob_neighbor_spread") * infected / total
|
||||||
|
self.debug("prob_infect", prob_infect)
|
||||||
|
if self.prob(prob_infect):
|
||||||
|
self.has_been_infected = True
|
||||||
|
|
||||||
|
|
||||||
|
class WiseViewer(HerdViewer):
|
||||||
|
"""
|
||||||
|
A viewer that can change its mind.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@state
|
||||||
|
def cured(self):
|
||||||
|
prob_cure = self.get("prob_neighbor_cure")
|
||||||
|
for neighbor in self.get_neighbors(state_id=self.infected.id):
|
||||||
|
if self.prob(prob_cure):
|
||||||
|
try:
|
||||||
|
neighbor.cure()
|
||||||
|
except AttributeError:
|
||||||
|
self.debug("Viewer {} cannot be cured".format(neighbor.id))
|
||||||
|
|
||||||
|
def cure(self):
|
||||||
|
self.has_been_cured = True
|
||||||
|
|
||||||
|
@state
|
||||||
|
def infected(self):
|
||||||
|
if self.has_been_cured:
|
||||||
|
return self.cured
|
||||||
|
cured = max(self.count_neighbors(self.cured.id), 1.0)
|
||||||
|
infected = max(self.count_neighbors(self.infected.id), 1.0)
|
||||||
|
prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
|
||||||
|
if self.prob(prob_cure):
|
||||||
|
return self.cured
|
||||||
|
|
||||||
|
|
||||||
|
class NewsSpread(Environment):
|
||||||
|
ratio_dumb: probability = 1,
|
||||||
|
ratio_herd: probability = 0,
|
||||||
|
ratio_wise: probability = 0,
|
||||||
|
prob_tv_spread: probability = 0.1,
|
||||||
|
prob_neighbor_spread: probability = 0.1,
|
||||||
|
prob_neighbor_cure: probability = 0.05,
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.populate_network([DumbViewer, HerdViewer, WiseViewer],
|
||||||
|
[self.ratio_dumb, self.ratio_herd, self.ratio_wise])
|
||||||
|
|
||||||
|
|
||||||
|
from itertools import product
|
||||||
|
from soil import Simulation
|
||||||
|
|
||||||
|
|
||||||
|
# We want to investigate the effect of different agent distributions on the spread of news.
|
||||||
|
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
|
||||||
|
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
|
||||||
|
# one with a small-world structure and one with a connected structure.
|
||||||
|
|
||||||
|
counter = 0
|
||||||
|
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
|
||||||
|
for (generator, netparams) in {
|
||||||
|
"barabasi_albert_graph": {"m": 5},
|
||||||
|
"erdos_renyi_graph": {"p": 0.1},
|
||||||
|
}.items():
|
||||||
|
print(r1, r2, 1-r1-r2, generator)
|
||||||
|
# Create new simulation
|
||||||
|
netparams["n"] = 500
|
||||||
|
Simulation(
|
||||||
|
name='newspread_sim',
|
||||||
|
model=NewsSpread,
|
||||||
|
parameters=dict(
|
||||||
|
ratio_dumb=r1,
|
||||||
|
ratio_herd=r2,
|
||||||
|
ratio_wise=1-r1-r2,
|
||||||
|
network_generator=generator,
|
||||||
|
network_params=netparams,
|
||||||
|
prob_neighbor_spread=0,
|
||||||
|
),
|
||||||
|
iterations=5,
|
||||||
|
max_steps=300,
|
||||||
|
dump=False,
|
||||||
|
).run()
|
||||||
|
counter += 1
|
||||||
|
# Run all the necessary instances
|
||||||
|
|
||||||
|
print(f"A total of {counter} simulations were run.")
|
@ -1,40 +0,0 @@
'''
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from networkx import Graph
import logging


def mygenerator():
    # Add only a node
    G = Graph()
    G.add_node(1)
    return G


class MyAgent(agents.FSM):

    @agents.default_state
    @agents.state
    def neutral(self):
        self.debug('I am running')
        if agents.prob(0.2):
            self.info('This runs 2/10 times on average')


s = Simulation(name='Programmatic',
               network_params={'generator': mygenerator},
               num_trials=1,
               max_time=100,
               agent_type=MyAgent,
               dry_run=True)


# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()

# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')
|
|
53
examples/programmatic/programmatic_sim.py
Normal file
@ -0,0 +1,53 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, Environment, agents
from networkx import Graph
import logging


def mygenerator():
    # Add only a node
    G = Graph()
    G.add_node(1)
    G.add_node(2)
    return G


class MyAgent(agents.NetworkAgent, agents.FSM):
    times_run = 0

    @agents.default_state
    @agents.state
    def neutral(self):
        self.debug("I am running")
        if self.prob(0.2):
            self.times_run += 1
            self.info("This runs 2/10 times on average")


class ProgrammaticEnv(Environment):

    def init(self):
        self.create_network(generator=mygenerator)
        assert len(self.G)
        self.populate_network(agent_class=MyAgent)
        self.add_agent_reporter('times_run')


simulation = Simulation(
    name="Programmatic",
    model=ProgrammaticEnv,
    seed='Program',
    iterations=1,
    max_time=100,
    dump=False,
)

if __name__ == "__main__":
    # By default, logging will only print WARNING logs (and above).
    # You need to choose a lower logging level to get INFO/DEBUG traces
    logging.basicConfig(level=logging.INFO)
    envs = simulation.run()

    for agent in envs[0].agents:
        print(agent.times_run)
|
@ -1,175 +0,0 @@
|
|||||||
from soil.agents import FSM, state, default_state
|
|
||||||
from soil import Environment
|
|
||||||
from random import random, shuffle
|
|
||||||
from itertools import islice
|
|
||||||
import logging
|
|
||||||
|
|
||||||
|
|
||||||
class CityPubs(Environment):
|
|
||||||
'''Environment with Pubs'''
|
|
||||||
level = logging.INFO
|
|
||||||
|
|
||||||
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
|
|
||||||
super(CityPubs, self).__init__(*args, **kwargs)
|
|
||||||
pubs = {}
|
|
||||||
for i in range(number_of_pubs):
|
|
||||||
newpub = {
|
|
||||||
'name': 'The awesome pub #{}'.format(i),
|
|
||||||
'open': True,
|
|
||||||
'capacity': pub_capacity,
|
|
||||||
'occupancy': 0,
|
|
||||||
}
|
|
||||||
pubs[newpub['name']] = newpub
|
|
||||||
self['pubs'] = pubs
|
|
||||||
|
|
||||||
def enter(self, pub_id, *nodes):
|
|
||||||
'''Agents will try to enter. The pub checks if it is possible'''
|
|
||||||
try:
|
|
||||||
pub = self['pubs'][pub_id]
|
|
||||||
except KeyError:
|
|
||||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
|
||||||
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
|
|
||||||
return False
|
|
||||||
pub['occupancy'] += len(nodes)
|
|
||||||
for node in nodes:
|
|
||||||
node['pub'] = pub_id
|
|
||||||
return True
|
|
||||||
|
|
||||||
def available_pubs(self):
|
|
||||||
for pub in self['pubs'].values():
|
|
||||||
if pub['open'] and (pub['occupancy'] < pub['capacity']):
|
|
||||||
yield pub['name']
|
|
||||||
|
|
||||||
def exit(self, pub_id, *node_ids):
|
|
||||||
'''Agents will notify the pub they want to leave'''
|
|
||||||
try:
|
|
||||||
pub = self['pubs'][pub_id]
|
|
||||||
except KeyError:
|
|
||||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
|
||||||
for node_id in node_ids:
|
|
||||||
node = self.get_agent(node_id)
|
|
||||||
if pub_id == node['pub']:
|
|
||||||
del node['pub']
|
|
||||||
pub['occupancy'] -= 1
|
|
||||||
|
|
||||||
|
|
||||||
class Patron(FSM):
|
|
||||||
'''Agent that looks for friends to drink with. It will do three things:
|
|
||||||
1) Look for other patrons to drink with
|
|
||||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
|
||||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
|
||||||
'''
|
|
||||||
level = logging.DEBUG
|
|
||||||
|
|
||||||
defaults = {
|
|
||||||
'pub': None,
|
|
||||||
'drunk': False,
|
|
||||||
'pints': 0,
|
|
||||||
'max_pints': 3,
|
|
||||||
}
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def looking_for_friends(self):
|
|
||||||
'''Look for friends to drink with'''
|
|
||||||
self.info('I am looking for friends')
|
|
||||||
available_friends = list(self.get_agents(drunk=False,
|
|
||||||
pub=None,
|
|
||||||
state_id=self.looking_for_friends.id))
|
|
||||||
if not available_friends:
|
|
||||||
self.info('Life sucks and I\'m alone!')
|
|
||||||
return self.at_home
|
|
||||||
befriended = self.try_friends(available_friends)
|
|
||||||
if befriended:
|
|
||||||
return self.looking_for_pub
|
|
||||||
|
|
||||||
@state
|
|
||||||
def looking_for_pub(self):
|
|
||||||
'''Look for a pub that accepts me and my friends'''
|
|
||||||
if self['pub'] != None:
|
|
||||||
return self.sober_in_pub
|
|
||||||
self.debug('I am looking for a pub')
|
|
||||||
group = list(self.get_neighboring_agents())
|
|
||||||
for pub in self.env.available_pubs():
|
|
||||||
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
|
|
||||||
if self.env.enter(pub, self, *group):
|
|
||||||
self.info('We\'re all {} getting in {}!'.format(len(group), pub))
|
|
||||||
return self.sober_in_pub
|
|
||||||
|
|
||||||
@state
|
|
||||||
def sober_in_pub(self):
|
|
||||||
'''Drink up.'''
|
|
||||||
self.drink()
|
|
||||||
if self['pints'] > self['max_pints']:
|
|
||||||
return self.drunk_in_pub
|
|
||||||
|
|
||||||
@state
|
|
||||||
def drunk_in_pub(self):
|
|
||||||
'''I'm out. Take me home!'''
|
|
||||||
self.info('I\'m so drunk. Take me home!')
|
|
||||||
self['drunk'] = True
|
|
||||||
pass # out drunk
|
|
||||||
|
|
||||||
@state
|
|
||||||
def at_home(self):
|
|
||||||
'''The end'''
|
|
||||||
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
|
|
||||||
self.debug('I\'m home. Just like {} of my friends'.format(len(others)))
|
|
||||||
|
|
||||||
def drink(self):
|
|
||||||
self['pints'] += 1
|
|
||||||
self.debug('Cheers to that')
|
|
||||||
|
|
||||||
def kick_out(self):
|
|
||||||
self.set_state(self.at_home)
|
|
||||||
|
|
||||||
def befriend(self, other_agent, force=False):
|
|
||||||
'''
|
|
||||||
Try to become friends with another agent. The chances of
|
|
||||||
success depend on both agents' openness.
|
|
||||||
'''
|
|
||||||
if force or self['openness'] > random():
|
|
||||||
self.env.add_edge(self, other_agent)
|
|
||||||
self.info('Made some friend {}'.format(other_agent))
|
|
||||||
return True
|
|
||||||
return False
|
|
||||||
|
|
||||||
def try_friends(self, others):
|
|
||||||
''' Look for random agents around me and try to befriend them'''
|
|
||||||
befriended = False
|
|
||||||
k = int(10*self['openness'])
|
|
||||||
shuffle(others)
|
|
||||||
for friend in islice(others, k): # random.choice >= 3.7
|
|
||||||
if friend == self:
|
|
||||||
continue
|
|
||||||
if friend.befriend(self):
|
|
||||||
self.befriend(friend, force=True)
|
|
||||||
self.debug('Hooray! new friend: {}'.format(friend.id))
|
|
||||||
befriended = True
|
|
||||||
else:
|
|
||||||
self.debug('{} does not want to be friends'.format(friend.id))
|
|
||||||
return befriended
|
|
||||||
|
|
||||||
|
|
||||||
class Police(FSM):
|
|
||||||
'''Simple agent to take drunk people out of pubs.'''
|
|
||||||
level = logging.INFO
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def patrol(self):
|
|
||||||
drunksters = list(self.get_agents(drunk=True,
|
|
||||||
state_id=Patron.drunk_in_pub.id))
|
|
||||||
for drunk in drunksters:
|
|
||||||
self.info('Kicking out the trash: {}'.format(drunk.id))
|
|
||||||
drunk.kick_out()
|
|
||||||
else:
|
|
||||||
self.info('No trash to take out. Too bad.')
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
from soil import simulation
|
|
||||||
simulation.run_from_config('pubcrawl.yml',
|
|
||||||
dry_run=True,
|
|
||||||
dump=None,
|
|
||||||
parallel=False)
|
|
@ -1,26 +0,0 @@
---
name: pubcrawl
num_trials: 3
max_time: 10
dump: false
network_params:
  # Generate 100 empty nodes. They will be assigned a network agent
  generator: empty_graph
  n: 30
network_agents:
- agent_type: pubcrawl.Patron
  description: Extroverted patron
  state:
    openness: 1.0
  weight: 9
- agent_type: pubcrawl.Patron
  description: Introverted patron
  state:
    openness: 0.1
  weight: 1
environment_agents:
- agent_type: pubcrawl.Police
environment_class: pubcrawl.CityPubs
environment_params:
  altercations: 0
  number_of_pubs: 3
|
|
195
examples/pubcrawl/pubcrawl_sim.py
Normal file
@ -0,0 +1,195 @@
|
|||||||
|
from soil.agents import FSM, NetworkAgent, state, default_state
|
||||||
|
from soil import Environment, Simulation, parameters
|
||||||
|
from itertools import islice
|
||||||
|
import networkx as nx
|
||||||
|
import logging
|
||||||
|
|
||||||
|
|
||||||
|
class CityPubs(Environment):
|
||||||
|
"""Environment with Pubs"""
|
||||||
|
|
||||||
|
level = logging.INFO
|
||||||
|
number_of_pubs: parameters.Integer = 3
|
||||||
|
ratio_extroverted: parameters.probability = 0.1
|
||||||
|
pub_capacity: parameters.Integer = 10
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.pubs = {}
|
||||||
|
for i in range(self.number_of_pubs):
|
||||||
|
newpub = {
|
||||||
|
"name": "The awesome pub #{}".format(i),
|
||||||
|
"open": True,
|
||||||
|
"capacity": self.pub_capacity,
|
||||||
|
"occupancy": 0,
|
||||||
|
}
|
||||||
|
self.pubs[newpub["name"]] = newpub
|
||||||
|
self.add_agent(agent_class=Police)
|
||||||
|
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
|
||||||
|
[self.ratio_extroverted, 1-self.ratio_extroverted])
|
||||||
|
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
|
||||||
|
|
||||||
|
def enter(self, pub_id, *nodes):
|
||||||
|
"""Agents will try to enter. The pub checks if it is possible"""
|
||||||
|
try:
|
||||||
|
pub = self["pubs"][pub_id]
|
||||||
|
except KeyError:
|
||||||
|
raise ValueError("Pub {} is not available".format(pub_id))
|
||||||
|
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
|
||||||
|
return False
|
||||||
|
pub["occupancy"] += len(nodes)
|
||||||
|
for node in nodes:
|
||||||
|
node["pub"] = pub_id
|
||||||
|
return True
|
||||||
|
|
||||||
|
def available_pubs(self):
|
||||||
|
for pub in self["pubs"].values():
|
||||||
|
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
|
||||||
|
yield pub["name"]
|
||||||
|
|
||||||
|
def exit(self, pub_id, *node_ids):
|
||||||
|
"""Agents will notify the pub they want to leave"""
|
||||||
|
try:
|
||||||
|
pub = self["pubs"][pub_id]
|
||||||
|
except KeyError:
|
||||||
|
raise ValueError("Pub {} is not available".format(pub_id))
|
||||||
|
for node_id in node_ids:
|
||||||
|
node = self.get_agent(node_id)
|
||||||
|
if pub_id == node["pub"]:
|
||||||
|
del node["pub"]
|
||||||
|
pub["occupancy"] -= 1
|
||||||
|
|
||||||
|
|
||||||
|
class Patron(FSM, NetworkAgent):
|
||||||
|
"""Agent that looks for friends to drink with. It will do three things:
|
||||||
|
1) Look for other patrons to drink with
|
||||||
|
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||||
|
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||||
|
"""
|
||||||
|
|
||||||
|
level = logging.DEBUG
|
||||||
|
|
||||||
|
pub = None
|
||||||
|
drunk = False
|
||||||
|
pints = 0
|
||||||
|
max_pints = 3
|
||||||
|
kicked_out = False
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def looking_for_friends(self):
|
||||||
|
"""Look for friends to drink with"""
|
||||||
|
self.info("I am looking for friends")
|
||||||
|
available_friends = list(
|
||||||
|
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
|
||||||
|
)
|
||||||
|
if not available_friends:
|
||||||
|
self.info("Life sucks and I'm alone!")
|
||||||
|
return self.at_home
|
||||||
|
befriended = self.try_friends(available_friends)
|
||||||
|
if befriended:
|
||||||
|
return self.looking_for_pub
|
||||||
|
|
||||||
|
@state
|
||||||
|
def looking_for_pub(self):
|
||||||
|
"""Look for a pub that accepts me and my friends"""
|
||||||
|
if self["pub"] != None:
|
||||||
|
return self.sober_in_pub
|
||||||
|
self.debug("I am looking for a pub")
|
||||||
|
group = list(self.get_neighbors())
|
||||||
|
for pub in self.model.available_pubs():
|
||||||
|
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
|
||||||
|
if self.model.enter(pub, self, *group):
|
||||||
|
self.info("We're all {} getting in {}!".format(len(group), pub))
|
||||||
|
return self.sober_in_pub
|
||||||
|
|
||||||
|
@state
|
||||||
|
def sober_in_pub(self):
|
||||||
|
"""Drink up."""
|
||||||
|
self.drink()
|
||||||
|
if self["pints"] > self["max_pints"]:
|
||||||
|
return self.drunk_in_pub
|
||||||
|
|
||||||
|
@state
|
||||||
|
def drunk_in_pub(self):
|
||||||
|
"""I'm out. Take me home!"""
|
||||||
|
self.info("I'm so drunk. Take me home!")
|
||||||
|
self["drunk"] = True
|
||||||
|
if self.kicked_out:
|
||||||
|
return self.at_home
|
||||||
|
pass  # out drunk
|
||||||
|
|
||||||
|
@state
|
||||||
|
def at_home(self):
|
||||||
|
"""The end"""
|
||||||
|
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
|
||||||
|
self.debug("I'm home. Just like {} of my friends".format(len(others)))
|
||||||
|
|
||||||
|
def drink(self):
|
||||||
|
self["pints"] += 1
|
||||||
|
self.debug("Cheers to that")
|
||||||
|
|
||||||
|
def kick_out(self):
|
||||||
|
self.kicked_out = True
|
||||||
|
|
||||||
|
def befriend(self, other_agent, force=False):
|
||||||
|
"""
|
||||||
|
Try to become friends with another agent. The chances of
|
||||||
|
success depend on both agents' openness.
|
||||||
|
"""
|
||||||
|
if force or self["openness"] > self.random.random():
|
||||||
|
self.add_edge(self, other_agent)
|
||||||
|
self.info("Made some friend {}".format(other_agent))
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
def try_friends(self, others):
|
||||||
|
"""Look for random agents around me and try to befriend them"""
|
||||||
|
befriended = False
|
||||||
|
k = int(10 * self["openness"])
|
||||||
|
self.random.shuffle(others)
|
||||||
|
for friend in islice(others, k): # random.choice >= 3.7
|
||||||
|
if friend == self:
|
||||||
|
continue
|
||||||
|
if friend.befriend(self):
|
||||||
|
self.befriend(friend, force=True)
|
||||||
|
self.debug("Hooray! new friend: {}".format(friend.unique_id))
|
||||||
|
befriended = True
|
||||||
|
else:
|
||||||
|
self.debug("{} does not want to be friends".format(friend.unique_id))
|
||||||
|
return befriended
|
||||||
|
|
||||||
|
|
||||||
|
class Police(FSM):
|
||||||
|
"""Simple agent to take drunk people out of pubs."""
|
||||||
|
|
||||||
|
level = logging.INFO
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def patrol(self):
|
||||||
|
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
|
||||||
|
for drunk in drunksters:
|
||||||
|
self.info("Kicking out the trash: {}".format(drunk.unique_id))
|
||||||
|
drunk.kick_out()
|
||||||
|
else:
|
||||||
|
self.info("No trash to take out. Too bad.")
|
||||||
|
|
||||||
|
|
||||||
|
sim = Simulation(
|
||||||
|
model=CityPubs,
|
||||||
|
name="pubcrawl",
|
||||||
|
iterations=3,
|
||||||
|
max_steps=10,
|
||||||
|
dump=False,
|
||||||
|
parameters=dict(
|
||||||
|
network_generator=nx.empty_graph,
|
||||||
|
network_params={"n": 30},
|
||||||
|
model=CityPubs,
|
||||||
|
altercations=0,
|
||||||
|
number_of_pubs=3,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sim.run(parallel=False)
|
14
examples/rabbits/README.md
Normal file
@ -0,0 +1,14 @@
There are two similar implementations of this simulation.

- `basic`. Using simple primitives
- `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.

The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:

```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```

To learn more about how this functionality works, check out the `soil.easy` function.
|
||||||
|
|
@ -1,135 +0,0 @@
|
|||||||
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
|
|
||||||
from enum import Enum
|
|
||||||
from random import random, choice
|
|
||||||
import logging
|
|
||||||
import math
|
|
||||||
|
|
||||||
|
|
||||||
class Genders(Enum):
|
|
||||||
male = 'male'
|
|
||||||
female = 'female'
|
|
||||||
|
|
||||||
|
|
||||||
class RabbitModel(FSM):
|
|
||||||
|
|
||||||
defaults = {
|
|
||||||
'age': 0,
|
|
||||||
'gender': Genders.male.value,
|
|
||||||
'mating_prob': 0.001,
|
|
||||||
'offspring': 0,
|
|
||||||
}
|
|
||||||
|
|
||||||
sexual_maturity = 3 #4*30
|
|
||||||
life_expectancy = 365 * 3
|
|
||||||
gestation = 33
|
|
||||||
pregnancy = -1
|
|
||||||
max_females = 5
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def newborn(self):
|
|
||||||
self.debug(f'I am a newborn at age {self["age"]}')
|
|
||||||
self['age'] += 1
|
|
||||||
|
|
||||||
if self['age'] >= self.sexual_maturity:
|
|
||||||
self.debug('I am fertile!')
|
|
||||||
return self.fertile
|
|
||||||
@state
|
|
||||||
def fertile(self):
|
|
||||||
raise Exception("Each subclass should define its fertile state")
|
|
||||||
|
|
||||||
@state
|
|
||||||
def dead(self):
|
|
||||||
self.info('Agent {} is dying'.format(self.id))
|
|
||||||
self.die()
|
|
||||||
|
|
||||||
|
|
||||||
class Male(RabbitModel):
|
|
||||||
|
|
||||||
@state
|
|
||||||
def fertile(self):
|
|
||||||
self['age'] += 1
|
|
||||||
if self['age'] > self.life_expectancy:
|
|
||||||
return self.dead
|
|
||||||
|
|
||||||
if self['gender'] == Genders.female.value:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Males try to mate
|
|
||||||
for f in self.get_agents(state_id=Female.fertile.id,
|
|
||||||
agent_type=Female,
|
|
||||||
limit_neighbors=False,
|
|
||||||
limit=self.max_females):
|
|
||||||
r = random()
|
|
||||||
if r < self['mating_prob']:
|
|
||||||
self.impregnate(f)
|
|
||||||
break # Take a break
|
|
||||||
def impregnate(self, whom):
|
|
||||||
whom['pregnancy'] = 0
|
|
||||||
whom['mate'] = self.id
|
|
||||||
whom.set_state(whom.pregnant)
|
|
||||||
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
|
|
||||||
|
|
||||||
class Female(RabbitModel):
|
|
||||||
@state
|
|
||||||
def fertile(self):
|
|
||||||
# Just wait for a Male
|
|
||||||
pass
|
|
||||||
|
|
||||||
@state
|
|
||||||
def pregnant(self):
|
|
||||||
self['age'] += 1
|
|
||||||
if self['age'] > self.life_expectancy:
|
|
||||||
return self.dead
|
|
||||||
|
|
||||||
self['pregnancy'] += 1
|
|
||||||
self.debug('Pregnancy: {}'.format(self['pregnancy']))
|
|
||||||
if self['pregnancy'] >= self.gestation:
|
|
||||||
number_of_babies = int(8+4*random())
|
|
||||||
self.info('Having {} babies'.format(number_of_babies))
|
|
||||||
for i in range(number_of_babies):
|
|
||||||
state = {}
|
|
||||||
state['gender'] = choice(list(Genders)).value
|
|
||||||
child = self.env.add_node(self.__class__, state)
|
|
||||||
self.env.add_edge(self.id, child.id)
|
|
||||||
self.env.add_edge(self['mate'], child.id)
|
|
||||||
# self.add_edge()
|
|
||||||
self.debug('A BABY IS COMING TO LIFE')
|
|
||||||
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.topology.number_of_nodes())+1
|
|
||||||
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
|
|
||||||
self['offspring'] += 1
|
|
||||||
self.env.get_agent(self['mate'])['offspring'] += 1
|
|
||||||
del self['mate']
|
|
||||||
self['pregnancy'] = -1
|
|
||||||
return self.fertile
|
|
||||||
|
|
||||||
@state
|
|
||||||
def dead(self):
|
|
||||||
super().dead()
|
|
||||||
if 'pregnancy' in self and self['pregnancy'] > -1:
|
|
||||||
self.info('A mother has died carrying a baby!!')
|
|
||||||
|
|
||||||
|
|
||||||
class RandomAccident(NetworkAgent):
|
|
||||||
|
|
||||||
level = logging.DEBUG
|
|
||||||
|
|
||||||
def step(self):
|
|
||||||
rabbits_total = self.topology.number_of_nodes()
|
|
||||||
if 'rabbits_alive' not in self.env:
|
|
||||||
self.env['rabbits_alive'] = 0
|
|
||||||
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
|
|
||||||
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
|
||||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
|
||||||
for i in self.env.network_agents:
|
|
||||||
if i.state['id'] == i.dead.id:
|
|
||||||
continue
|
|
||||||
r = random()
|
|
||||||
if r < prob_death:
|
|
||||||
self.debug('I killed a rabbit: {}'.format(i.id))
|
|
||||||
rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
|
|
||||||
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
|
|
||||||
i.set_state(i.dead)
|
|
||||||
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
|
|
||||||
if self.count_agents(state_id=RabbitModel.dead.id) == self.topology.number_of_nodes():
|
|
||||||
self.die()
|
|
153
examples/rabbits/rabbit_improved_sim.py
Normal file
@ -0,0 +1,153 @@
|
|||||||
|
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
|
||||||
|
from soil.time import Delta
|
||||||
|
from enum import Enum
|
||||||
|
from collections import Counter
|
||||||
|
import logging
|
||||||
|
import math
|
||||||
|
|
||||||
|
from rabbits_basic_sim import RabbitEnv
|
||||||
|
|
||||||
|
|
||||||
|
class RabbitsImprovedEnv(RabbitEnv):
|
||||||
|
def init(self):
|
||||||
|
"""Initialize the environment with the new versions of the agents"""
|
||||||
|
a1 = self.add_node(Male)
|
||||||
|
a2 = self.add_node(Female)
|
||||||
|
a1.add_edge(a2)
|
||||||
|
self.add_agent(RandomAccident)
|
||||||
|
|
||||||
|
|
||||||
|
class Rabbit(FSM, NetworkAgent):
|
||||||
|
|
||||||
|
sexual_maturity = 30
|
||||||
|
life_expectancy = 300
|
||||||
|
birth = None
|
||||||
|
|
||||||
|
@property
|
||||||
|
def age(self):
|
||||||
|
if self.birth is None:
|
||||||
|
return None
|
||||||
|
return self.now - self.birth
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def newborn(self):
|
||||||
|
self.info("I am a newborn.")
|
||||||
|
self.birth = self.now
|
||||||
|
self.offspring = 0
|
||||||
|
return self.youngling, Delta(self.sexual_maturity - self.age)
|
||||||
|
|
||||||
|
@state
|
||||||
|
def youngling(self):
|
||||||
|
if self.age >= self.sexual_maturity:
|
||||||
|
self.info(f"I am fertile! My age is {self.age}")
|
||||||
|
return self.fertile
|
||||||
|
|
||||||
|
@state
|
||||||
|
def fertile(self):
|
||||||
|
raise Exception("Each subclass should define its fertile state")
|
||||||
|
|
||||||
|
@state
|
||||||
|
def dead(self):
|
||||||
|
self.die()
|
||||||
|
|
||||||
|
|
||||||
|
class Male(Rabbit):
|
||||||
|
max_females = 5
|
||||||
|
mating_prob = 0.001
|
||||||
|
|
||||||
|
@state
|
||||||
|
def fertile(self):
|
||||||
|
if self.age > self.life_expectancy:
|
||||||
|
return self.dead
|
||||||
|
|
||||||
|
# Males try to mate
|
||||||
|
for f in self.model.agents(
|
||||||
|
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||||
|
):
|
||||||
|
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||||
|
if self.prob(self["mating_prob"]):
|
||||||
|
f.impregnate(self)
|
||||||
|
break # Do not try to impregnate other females
|
||||||
|
|
||||||
|
|
||||||
|
class Female(Rabbit):
|
||||||
|
gestation = 10
|
||||||
|
conception = None
|
||||||
|
|
||||||
|
@state
|
||||||
|
def fertile(self):
|
||||||
|
# Just wait for a Male
|
||||||
|
if self.age > self.life_expectancy:
|
||||||
|
return self.dead
|
||||||
|
if self.conception is not None:
|
||||||
|
return self.pregnant
|
||||||
|
|
||||||
|
@property
|
||||||
|
def pregnancy(self):
|
||||||
|
if self.conception is None:
|
||||||
|
return None
|
||||||
|
return self.now - self.conception
|
||||||
|
|
||||||
|
def impregnate(self, male):
|
||||||
|
self.info(f"impregnated by {repr(male)}")
|
||||||
|
self.mate = male
|
||||||
|
self.conception = self.now
|
||||||
|
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||||
|
|
||||||
|
@state
|
||||||
|
def pregnant(self):
|
||||||
|
self.debug("I am pregnant")
|
||||||
|
|
||||||
|
if self.age > self.life_expectancy:
|
||||||
|
self.info("Dying before giving birth")
|
||||||
|
return self.die()
|
||||||
|
|
||||||
|
if self.pregnancy >= self.gestation:
|
||||||
|
self.info("Having {} babies".format(self.number_of_babies))
|
||||||
|
for i in range(self.number_of_babies):
|
||||||
|
state = {}
|
||||||
|
agent_class = self.random.choice([Male, Female])
|
||||||
|
child = self.model.add_node(agent_class=agent_class, **state)
|
||||||
|
child.add_edge(self)
|
||||||
|
if self.mate:
|
||||||
|
child.add_edge(self.mate)
|
||||||
|
self.mate.offspring += 1
|
||||||
|
else:
|
||||||
|
self.debug("The father has passed away")
|
||||||
|
|
||||||
|
self.offspring += 1
|
||||||
|
self.mate = None
|
||||||
|
return self.fertile
|
||||||
|
|
||||||
|
def die(self):
|
||||||
|
if self.pregnancy is not None:
|
||||||
|
self.info("A mother has died carrying a baby!!")
|
||||||
|
return super().die()
|
||||||
|
|
||||||
|
|
||||||
|
class RandomAccident(BaseAgent):
|
||||||
|
def step(self):
|
||||||
|
rabbits_alive = self.model.G.number_of_nodes()
|
||||||
|
|
||||||
|
if not rabbits_alive:
|
||||||
|
return self.die()
|
||||||
|
|
||||||
|
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||||
|
math.log10(max(1, rabbits_alive))
|
||||||
|
)
|
||||||
|
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||||
|
for i in self.iter_agents(agent_class=Rabbit):
|
||||||
|
if i.state_id == i.dead.id:
|
||||||
|
continue
|
||||||
|
if self.prob(prob_death):
|
||||||
|
self.info("I killed a rabbit: {}".format(i.id))
|
||||||
|
rabbits_alive -= 1
|
||||||
|
i.die()
|
||||||
|
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||||
|
|
||||||
|
|
||||||
|
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sim.run()
|
@ -1,21 +0,0 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 1000
interval: 1
seed: MySeed
agent_type: rabbit_agents.RabbitModel
environment_agents:
  - agent_type: rabbit_agents.RandomAccident
environment_params:
  prob_death: 0.001
default_state:
  mating_prob: 0.1
topology:
  nodes:
    - id: 1
      agent_type: rabbit_agents.Male
    - id: 0
      agent_type: rabbit_agents.Female
  directed: true
  links: []
161 examples/rabbits/rabbits_basic_sim.py Normal file
@ -0,0 +1,161 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
from collections import Counter
import logging
import math


class RabbitEnv(Environment):
    prob_death: params.probability = 1e-100

    def init(self):
        a1 = self.add_node(Male)
        a2 = self.add_node(Female)
        a1.add_edge(a2)
        self.add_agent(RandomAccident)

    @report
    @property
    def num_rabbits(self):
        return self.count_agents(agent_class=Rabbit)

    @report
    @property
    def num_males(self):
        return self.count_agents(agent_class=Male)

    @report
    @property
    def num_females(self):
        return self.count_agents(agent_class=Female)


class Rabbit(NetworkAgent, FSM):

    sexual_maturity = 30
    life_expectancy = 300

    @default_state
    @state
    def newborn(self):
        self.info("I am a newborn.")
        self.age = 0
        self.offspring = 0
        return self.youngling

    @state
    def youngling(self):
        self.age += 1
        if self.age >= self.sexual_maturity:
            self.info(f"I am fertile! My age is {self.age}")
            return self.fertile

    @state
    def fertile(self):
        raise Exception("Each subclass should define its fertile state")

    @state
    def dead(self):
        self.die()


class Male(Rabbit):
    max_females = 5
    mating_prob = 0.001

    @state
    def fertile(self):
        self.age += 1

        if self.age > self.life_expectancy:
            return self.dead

        # Males try to mate
        for f in self.model.agents(
            agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
        ):
            self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
            if self.prob(self["mating_prob"]):
                f.impregnate(self)
                break  # Take a break


class Female(Rabbit):
    gestation = 10
    pregnancy = -1

    @state
    def fertile(self):
        # Just wait for a Male
        self.age += 1
        if self.age > self.life_expectancy:
            return self.dead
        if self.pregnancy >= 0:
            return self.pregnant

    def impregnate(self, male):
        self.info(f"impregnated by {repr(male)}")
        self.mate = male
        self.pregnancy = 0
        self.number_of_babies = int(8 + 4 * self.random.random())

    @state
    def pregnant(self):
        self.info("I am pregnant")
        self.age += 1

        if self.age >= self.life_expectancy:
            return self.die()

        if self.pregnancy < self.gestation:
            self.pregnancy += 1
            return

        self.info("Having {} babies".format(self.number_of_babies))
        for i in range(self.number_of_babies):
            state = {}
            agent_class = self.random.choice([Male, Female])
            child = self.model.add_node(agent_class=agent_class, **state)
            child.add_edge(self)
            try:
                child.add_edge(self.mate)
                self.model.agents[self.mate].offspring += 1
            except ValueError:
                self.debug("The father has passed away")

            self.offspring += 1
        self.mate = None
        self.pregnancy = -1
        return self.fertile

    def die(self):
        if "pregnancy" in self and self["pregnancy"] > -1:
            self.info("A mother has died carrying a baby!!")
        return super().die()


class RandomAccident(BaseAgent):
    def step(self):
        rabbits_alive = self.model.G.number_of_nodes()

        if not rabbits_alive:
            return self.die()

        prob_death = self.model.prob_death * math.floor(
            math.log10(max(1, rabbits_alive))
        )
        self.debug("Killing some rabbits with prob={}!".format(prob_death))
        for i in self.get_agents(agent_class=Rabbit):
            if i.state_id == i.dead.id:
                continue
            if self.prob(prob_death):
                self.info("I killed a rabbit: {}".format(i.id))
                rabbits_alive -= 1
                i.die()
        self.debug("Rabbits alive: {}".format(rabbits_alive))


sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)

if __name__ == "__main__":
    sim.run()
@ -1,45 +0,0 @@
'''
Example of setting a
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from soil.time import Delta
from random import expovariate
import logging


class MyAgent(agents.FSM):
    '''
    An agent that first does a ping
    '''

    defaults = {'pong_counts': 2}

    @agents.default_state
    @agents.state
    def ping(self):
        self.info('Ping')
        return self.pong, Delta(expovariate(1/16))

    @agents.state
    def pong(self):
        self.info('Pong')
        self.pong_counts -= 1
        self.info(str(self.pong_counts))
        if self.pong_counts < 1:
            return self.die()
        return None, Delta(expovariate(1/16))


s = Simulation(name='Programmatic',
               network_agents=[{'agent_type': MyAgent, 'id': 0}],
               topology={'nodes': [{'id': 0}], 'links': []},
               num_trials=1,
               max_time=100,
               agent_type=MyAgent,
               dry_run=True)


logging.basicConfig(level=logging.INFO)
envs = s.run()
47 examples/random_delays/random_delays_sim.py Normal file
@ -0,0 +1,47 @@
"""
Example of setting a
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents, Environment
from soil.time import Delta


class MyAgent(agents.FSM):
    """
    An agent that first does a ping
    """

    defaults = {"pong_counts": 2}

    @agents.default_state
    @agents.state
    def ping(self):
        self.info("Ping")
        return self.pong, Delta(self.random.expovariate(1 / 16))

    @agents.state
    def pong(self):
        self.info("Pong")
        self.pong_counts -= 1
        self.info(str(self.pong_counts))
        if self.pong_counts < 1:
            return self.die()
        return None, Delta(self.random.expovariate(1 / 16))


class RandomEnv(Environment):

    def init(self):
        self.add_agent(agent_class=MyAgent)


s = Simulation(
    name="Programmatic",
    model=RandomEnv,
    iterations=1,
    max_time=100,
    dump=False,
)


envs = s.run()
@ -1,30 +0,0 @@
---
sampler:
  method: "SALib.sample.morris.sample"
  N: 10
template:
  group: simple
  num_trials: 1
  interval: 1
  max_time: 2
  seed: "CompleteSeed!"
  dump: false
  network_params:
    generator: complete_graph
    n: 10
  network_agents:
    - agent_type: CounterModel
      weight: "{{ x1 }}"
      state:
        state_id: 0
    - agent_type: AggregatedCounter
      weight: "{{ 1 - x1 }}"
  environment_params:
    name: "{{ x3 }}"
  skip_test: true
vars:
  bounds:
    x1: [0, 1]
    x2: [1, 2]
  fixed:
    x3: ["a", "b", "c"]
@ -1,208 +0,0 @@
|
|||||||
import random
|
|
||||||
import networkx as nx
|
|
||||||
from soil.agents import Geo, NetworkAgent, FSM, state, default_state
|
|
||||||
from soil import Environment
|
|
||||||
|
|
||||||
|
|
||||||
class TerroristSpreadModel(FSM, Geo):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
information_spread_intensity
|
|
||||||
|
|
||||||
terrorist_additional_influence
|
|
||||||
|
|
||||||
min_vulnerability (optional else zero)
|
|
||||||
|
|
||||||
max_vulnerability
|
|
||||||
|
|
||||||
prob_interaction
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
|
||||||
|
|
||||||
self.information_spread_intensity = model.environment_params['information_spread_intensity']
|
|
||||||
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
|
|
||||||
self.prob_interaction = model.environment_params['prob_interaction']
|
|
||||||
|
|
||||||
if self['id'] == self.civilian.id: # Civilian
|
|
||||||
self.mean_belief = random.uniform(0.00, 0.5)
|
|
||||||
elif self['id'] == self.terrorist.id: # Terrorist
|
|
||||||
self.mean_belief = random.uniform(0.8, 1.00)
|
|
||||||
elif self['id'] == self.leader.id: # Leader
|
|
||||||
self.mean_belief = 1.00
|
|
||||||
else:
|
|
||||||
raise Exception('Invalid state id: {}'.format(self['id']))
|
|
||||||
|
|
||||||
if 'min_vulnerability' in model.environment_params:
|
|
||||||
self.vulnerability = random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
|
|
||||||
else :
|
|
||||||
self.vulnerability = random.uniform( 0, model.environment_params['max_vulnerability'] )
|
|
||||||
|
|
||||||
|
|
||||||
@state
|
|
||||||
def civilian(self):
|
|
||||||
neighbours = list(self.get_neighboring_agents(agent_type=TerroristSpreadModel))
|
|
||||||
if len(neighbours) > 0:
|
|
||||||
# Only interact with some of the neighbors
|
|
||||||
interactions = list(n for n in neighbours if random.random() <= self.prob_interaction)
|
|
||||||
influence = sum( self.degree(i) for i in interactions )
|
|
||||||
mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
|
|
||||||
mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
|
|
||||||
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
|
||||||
|
|
||||||
if self.mean_belief >= 0.8:
|
|
||||||
return self.terrorist
|
|
||||||
|
|
||||||
@state
|
|
||||||
def leader(self):
|
|
||||||
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
|
|
||||||
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]):
|
|
||||||
if self.betweenness(neighbour) > self.betweenness(self):
|
|
||||||
return self.terrorist
|
|
||||||
|
|
||||||
@state
|
|
||||||
def terrorist(self):
|
|
||||||
neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
|
|
||||||
agent_type=TerroristSpreadModel,
|
|
||||||
limit_neighbors=True)
|
|
||||||
if len(neighbours) > 0:
|
|
||||||
influence = sum( self.degree(n) for n in neighbours )
|
|
||||||
mean_belief = sum( n.mean_belief * self.degree(n) / influence for n in neighbours )
|
|
||||||
mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
|
||||||
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
|
|
||||||
|
|
||||||
# Check if there are any leaders in the group
|
|
||||||
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
|
|
||||||
if not leaders:
|
|
||||||
# Check if this is the potential leader
|
|
||||||
# Stop once it's found. Otherwise, set self as leader
|
|
||||||
for neighbour in neighbours:
|
|
||||||
if self.betweenness(self) < self.betweenness(neighbour):
|
|
||||||
return
|
|
||||||
return self.leader
|
|
||||||
|
|
||||||
|
|
||||||
class TrainingAreaModel(FSM, Geo):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
training_influence
|
|
||||||
|
|
||||||
min_vulnerability
|
|
||||||
|
|
||||||
Requires TerroristSpreadModel.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
|
||||||
self.training_influence = model.environment_params['training_influence']
|
|
||||||
if 'min_vulnerability' in model.environment_params:
|
|
||||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
|
||||||
else: self.min_vulnerability = 0
|
|
||||||
|
|
||||||
@default_state
|
|
||||||
@state
|
|
||||||
def terrorist(self):
|
|
||||||
for neighbour in self.get_neighboring_agents(agent_type=TerroristSpreadModel):
|
|
||||||
if neighbour.vulnerability > self.min_vulnerability:
|
|
||||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
|
|
||||||
|
|
||||||
|
|
||||||
class HavenModel(FSM, Geo):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
haven_influence
|
|
||||||
|
|
||||||
min_vulnerability
|
|
||||||
|
|
||||||
max_vulnerability
|
|
||||||
|
|
||||||
Requires TerroristSpreadModel.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
|
||||||
self.haven_influence = model.environment_params['haven_influence']
|
|
||||||
if 'min_vulnerability' in model.environment_params:
|
|
||||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
|
||||||
else: self.min_vulnerability = 0
|
|
||||||
self.max_vulnerability = model.environment_params['max_vulnerability']
|
|
||||||
|
|
||||||
def get_occupants(self, **kwargs):
|
|
||||||
return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs)
|
|
||||||
|
|
||||||
@state
|
|
||||||
def civilian(self):
|
|
||||||
civilians = self.get_occupants(state_id=self.civilian.id)
|
|
||||||
if not civilians:
|
|
||||||
return self.terrorist
|
|
||||||
|
|
||||||
for neighbour in self.get_occupants():
|
|
||||||
if neighbour.vulnerability > self.min_vulnerability:
|
|
||||||
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence )
|
|
||||||
return self.civilian
|
|
||||||
|
|
||||||
@state
|
|
||||||
def terrorist(self):
|
|
||||||
for neighbour in self.get_occupants():
|
|
||||||
if neighbour.vulnerability < self.max_vulnerability:
|
|
||||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence )
|
|
||||||
return self.terrorist
|
|
||||||
|
|
||||||
|
|
||||||
class TerroristNetworkModel(TerroristSpreadModel):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
sphere_influence
|
|
||||||
|
|
||||||
vision_range
|
|
||||||
|
|
||||||
weight_social_distance
|
|
||||||
|
|
||||||
weight_link_distance
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
|
||||||
|
|
||||||
self.vision_range = model.environment_params['vision_range']
|
|
||||||
self.sphere_influence = model.environment_params['sphere_influence']
|
|
||||||
self.weight_social_distance = model.environment_params['weight_social_distance']
|
|
||||||
self.weight_link_distance = model.environment_params['weight_link_distance']
|
|
||||||
|
|
||||||
@state
|
|
||||||
def terrorist(self):
|
|
||||||
self.update_relationships()
|
|
||||||
return super().terrorist()
|
|
||||||
|
|
||||||
@state
|
|
||||||
def leader(self):
|
|
||||||
self.update_relationships()
|
|
||||||
return super().leader()
|
|
||||||
|
|
||||||
def update_relationships(self):
|
|
||||||
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
|
|
||||||
close_ups = set(self.geo_search(radius=self.vision_range, agent_type=TerroristNetworkModel))
|
|
||||||
step_neighbours = set(self.ego_search(self.sphere_influence, agent_type=TerroristNetworkModel, center=False))
|
|
||||||
neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_type=TerroristNetworkModel))
|
|
||||||
search = (close_ups | step_neighbours) - neighbours
|
|
||||||
for agent in self.get_agents(search):
|
|
||||||
social_distance = 1 / self.shortest_path_length(agent.id)
|
|
||||||
spatial_proximity = ( 1 - self.get_distance(agent.id) )
|
|
||||||
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
|
|
||||||
if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction:
|
|
||||||
self.add_edge(agent)
|
|
||||||
break
|
|
||||||
|
|
||||||
def get_distance(self, target):
|
|
||||||
source_x, source_y = nx.get_node_attributes(self.topology, 'pos')[self.id]
|
|
||||||
target_x, target_y = nx.get_node_attributes(self.topology, 'pos')[target]
|
|
||||||
dx = abs( source_x - target_x )
|
|
||||||
dy = abs( source_y - target_y )
|
|
||||||
return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 )
|
|
||||||
|
|
||||||
def shortest_path_length(self, target):
|
|
||||||
try:
|
|
||||||
return nx.shortest_path_length(self.topology, self.id, target)
|
|
||||||
except nx.NetworkXNoPath:
|
|
||||||
return float('inf')
|
|
@ -1,63 +0,0 @@
name: TerroristNetworkModel_sim
load_module: TerroristNetworkModel
max_time: 150
num_trials: 1
network_params:
  generator: random_geometric_graph
  radius: 0.2
  # generator: geographical_threshold_graph
  # theta: 20
  n: 100
network_agents:
  - agent_type: TerroristNetworkModel
    weight: 0.8
    state:
      id: civilian  # Civilians
  - agent_type: TerroristNetworkModel
    weight: 0.1
    state:
      id: leader  # Leaders
  - agent_type: TrainingAreaModel
    weight: 0.05
    state:
      id: terrorist  # Terrorism
  - agent_type: HavenModel
    weight: 0.05
    state:
      id: civilian  # Civilian

environment_params:
  # TerroristSpreadModel
  information_spread_intensity: 0.7
  terrorist_additional_influence: 0.035
  max_vulnerability: 0.7
  prob_interaction: 0.5

  # TrainingAreaModel and HavenModel
  training_influence: 0.20
  haven_influence: 0.20

  # TerroristNetworkModel
  vision_range: 0.30
  sphere_influence: 2
  weight_social_distance: 0.035
  weight_link_distance: 0.035

visualization_params:
  # Icons downloaded from https://www.iconfinder.com/
  shape_property: agent
  shapes:
    TrainingAreaModel: target
    HavenModel: home
    TerroristNetworkModel: person
  colors:
    - attr_id: civilian
      color: '#40de40'
    - attr_id: terrorist
      color: red
    - attr_id: leader
      color: '#c16a6a'
  background_image: 'map_4800x2860.jpg'
  background_opacity: '0.9'
  background_filter_color: 'blue'
skip_test: true  # This simulation takes too long for automated tests.
341
examples/terrorism/TerroristNetworkModel_sim.py
Normal file
@ -0,0 +1,341 @@
|
|||||||
|
import networkx as nx
|
||||||
|
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
|
||||||
|
from soil import Environment, Simulation
|
||||||
|
from soil.parameters import *
|
||||||
|
from soil.utils import int_seed
|
||||||
|
|
||||||
|
|
||||||
|
class TerroristEnvironment(Environment):
|
||||||
|
n: Integer = 100
|
||||||
|
radius: Float = 0.2
|
||||||
|
|
||||||
|
information_spread_intensity: probability = 0.7
|
||||||
|
terrorist_additional_influence: probability = 0.035
|
||||||
|
max_vulnerability: probability = 0.7
|
||||||
|
prob_interaction: probability = 0.5
|
||||||
|
|
||||||
|
# TrainingAreaModel and HavenModel
|
||||||
|
training_influence: probability = 0.20
|
||||||
|
haven_influence: probability = 0.20
|
||||||
|
|
||||||
|
# TerroristNetworkModel
|
||||||
|
vision_range: Float = 0.30
|
||||||
|
sphere_influence: Integer = 2
|
||||||
|
weight_social_distance: Float = 0.035
|
||||||
|
weight_link_distance: Float = 0.035
|
||||||
|
|
||||||
|
ratio_civil: probability = 0.8
|
||||||
|
ratio_leader: probability = 0.1
|
||||||
|
ratio_training: probability = 0.05
|
||||||
|
ratio_haven: probability = 0.05
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
|
||||||
|
self.populate_network([
|
||||||
|
TerroristNetworkModel.w(state_id='civilian'),
|
||||||
|
TerroristNetworkModel.w(state_id='leader'),
|
||||||
|
TrainingAreaModel,
|
||||||
|
HavenModel
|
||||||
|
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
|
||||||
|
|
||||||
|
def generator(self, *args, **kwargs):
|
||||||
|
return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
|
||||||
|
|
||||||
|
class TerroristSpreadModel(FSM, Geo):
|
||||||
|
"""
|
||||||
|
Settings:
|
||||||
|
information_spread_intensity
|
||||||
|
|
||||||
|
terrorist_additional_influence
|
||||||
|
|
||||||
|
min_vulnerability (optional else zero)
|
||||||
|
|
||||||
|
max_vulnerability
|
||||||
|
"""
|
||||||
|
|
||||||
|
information_spread_intensity = 0.1
|
||||||
|
terrorist_additional_influence = 0.1
|
||||||
|
min_vulnerability = 0
|
||||||
|
max_vulnerability = 1
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
if self.state_id == self.civilian.id: # Civilian
|
||||||
|
self.mean_belief = self.model.random.uniform(0.00, 0.5)
|
||||||
|
elif self.state_id == self.terrorist.id: # Terrorist
|
||||||
|
self.mean_belief = self.random.uniform(0.8, 1.00)
|
||||||
|
elif self.state_id == self.leader.id: # Leader
|
||||||
|
self.mean_belief = 1.00
|
||||||
|
else:
|
||||||
|
raise Exception("Invalid state id: {}".format(self["id"]))
|
||||||
|
|
||||||
|
self.vulnerability = self.random.uniform(
|
||||||
|
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
|
||||||
|
)
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def civilian(self):
|
||||||
|
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
|
||||||
|
if len(neighbours) > 0:
|
||||||
|
# Only interact with some of the neighbors
|
||||||
|
interactions = list(
|
||||||
|
n for n in neighbours if self.random.random() <= self.model.prob_interaction
|
||||||
|
)
|
||||||
|
influence = sum(self.degree(i) for i in interactions)
|
||||||
|
mean_belief = sum(
|
||||||
|
i.mean_belief * self.degree(i) / influence for i in interactions
|
||||||
|
)
|
||||||
|
mean_belief = (
|
||||||
|
mean_belief * self.information_spread_intensity
|
||||||
|
+ self.mean_belief * (1 - self.information_spread_intensity)
|
||||||
|
)
|
||||||
|
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||||
|
1 - self.vulnerability
|
||||||
|
)
|
||||||
|
|
||||||
|
if self.mean_belief >= 0.8:
|
||||||
|
return self.terrorist
|
||||||
|
|
||||||
|
@state
|
||||||
|
def leader(self):
|
||||||
|
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
|
||||||
|
for neighbour in self.get_neighbors(
|
||||||
|
state_id=[self.terrorist.id, self.leader.id]
|
||||||
|
):
|
||||||
|
if self.betweenness(neighbour) > self.betweenness(self):
|
||||||
|
return self.terrorist
|
||||||
|
|
||||||
|
@state
|
||||||
|
def terrorist(self):
|
||||||
|
neighbours = self.get_agents(
|
||||||
|
state_id=[self.terrorist.id, self.leader.id],
|
||||||
|
agent_class=TerroristSpreadModel,
|
||||||
|
limit_neighbors=True,
|
||||||
|
)
|
||||||
|
if len(neighbours) > 0:
|
||||||
|
influence = sum(self.degree(n) for n in neighbours)
|
||||||
|
mean_belief = sum(
|
||||||
|
n.mean_belief * self.degree(n) / influence for n in neighbours
|
||||||
|
)
|
||||||
|
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||||
|
1 - self.vulnerability
|
||||||
|
)
|
||||||
|
self.mean_belief = self.mean_belief ** (
|
||||||
|
1 - self.terrorist_additional_influence
|
||||||
|
)
|
||||||
|
|
||||||
|
# Check if there are any leaders in the group
|
||||||
|
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
|
||||||
|
if not leaders:
|
||||||
|
# Check if this is the potential leader
|
||||||
|
# Stop once it's found. Otherwise, set self as leader
|
||||||
|
for neighbour in neighbours:
|
||||||
|
if self.betweenness(self) < self.betweenness(neighbour):
|
||||||
|
return
|
||||||
|
return self.leader
|
||||||
|
|
||||||
|
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
|
||||||
|
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
|
||||||
|
node = agent.node_id if agent else self.node_id
|
||||||
|
G = self.subgraph(**kwargs)
|
||||||
|
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
|
||||||
|
|
||||||
|
def degree(self, agent, force=False):
|
||||||
|
if (
|
||||||
|
force
|
||||||
|
or (not hasattr(self.model, "_degree"))
|
||||||
|
or getattr(self.model, "_last_step", 0) < self.now
|
||||||
|
):
|
||||||
|
self.model._degree = nx.degree_centrality(self.G)
|
||||||
|
self.model._last_step = self.now
|
||||||
|
return self.model._degree[agent.node_id]
|
||||||
|
|
||||||
|
def betweenness(self, agent, force=False):
|
||||||
|
if (
|
||||||
|
force
|
||||||
|
or (not hasattr(self.model, "_betweenness"))
|
||||||
|
or getattr(self.model, "_last_step", 0) < self.now
|
||||||
|
):
|
||||||
|
self.model._betweenness = nx.betweenness_centrality(self.G)
|
||||||
|
self.model._last_step = self.now
|
||||||
|
return self.model._betweenness[agent.node_id]
|
||||||
|
|
||||||
|
|
||||||
|
class TrainingAreaModel(FSM, Geo):
|
||||||
|
"""
|
||||||
|
Settings:
|
||||||
|
training_influence
|
||||||
|
|
||||||
|
min_vulnerability
|
||||||
|
|
||||||
|
Requires TerroristSpreadModel.
|
||||||
|
"""
|
||||||
|
|
||||||
|
training_influence = 0.1
|
||||||
|
min_vulnerability = 0
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.mean_believe = 1
|
||||||
|
self.vulnerability = 0
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def terrorist(self):
|
||||||
|
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
|
||||||
|
if neighbour.vulnerability > self.min_vulnerability:
|
||||||
|
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||||
|
1 - self.training_influence
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class HavenModel(FSM, Geo):
|
||||||
|
"""
|
||||||
|
Settings:
|
||||||
|
haven_influence
|
||||||
|
|
||||||
|
min_vulnerability
|
||||||
|
|
||||||
|
max_vulnerability
|
||||||
|
|
||||||
|
Requires TerroristSpreadModel.
|
||||||
|
"""
|
||||||
|
|
||||||
|
min_vulnerability = 0
|
||||||
|
haven_influence = 0.1
|
||||||
|
max_vulnerability = 0.5
|
||||||
|
|
||||||
|
def init(self):
|
||||||
|
self.mean_believe = 0
|
||||||
|
self.vulnerability = 0
|
||||||
|
|
||||||
|
def get_occupants(self, **kwargs):
|
||||||
|
return self.get_neighbors(agent_class=TerroristSpreadModel,
|
||||||
|
**kwargs)
|
||||||
|
|
||||||
|
@default_state
|
||||||
|
@state
|
||||||
|
def civilian(self):
|
||||||
|
civilians = self.get_occupants(state_id=self.civilian.id)
|
||||||
|
if not civilians:
|
||||||
|
return self.terrorist
|
||||||
|
|
||||||
|
for neighbour in self.get_occupants():
|
||||||
|
if neighbour.vulnerability > self.min_vulnerability:
|
||||||
|
neighbour.vulnerability = neighbour.vulnerability * (
|
||||||
|
1 - self.haven_influence
|
||||||
|
)
|
||||||
|
return self.civilian
|
||||||
|
|
||||||
|
@state
|
||||||
|
def terrorist(self):
|
||||||
|
for neighbour in self.get_occupants():
|
||||||
|
if neighbour.vulnerability < self.max_vulnerability:
|
||||||
|
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||||
|
1 - self.haven_influence
|
||||||
|
)
|
||||||
|
return self.terrorist
|
||||||
|
|
||||||
|
|
||||||
|
class TerroristNetworkModel(TerroristSpreadModel):
|
||||||
|
"""
|
||||||
|
Settings:
|
||||||
|
sphere_influence
|
||||||
|
|
||||||
|
vision_range
|
||||||
|
|
||||||
|
weight_social_distance
|
||||||
|
|
||||||
|
weight_link_distance
|
||||||
|
"""
|
||||||
|
|
||||||
|
sphere_influence: float = 1
|
||||||
|
vision_range: float = 1
|
||||||
|
weight_social_distance: float = 0.5
|
||||||
|
weight_link_distance: float = 0.2
|
||||||
|
|
||||||
|
@state
|
||||||
|
def terrorist(self):
|
||||||
|
self.update_relationships()
|
||||||
|
return super().terrorist()
|
||||||
|
|
||||||
|
@state
|
||||||
|
def leader(self):
|
||||||
|
self.update_relationships()
|
||||||
|
return super().leader()
|
||||||
|
|
||||||
|
def update_relationships(self):
|
||||||
|
if self.count_neighbors(state_id=self.civilian.id) == 0:
|
||||||
|
close_ups = set(
|
||||||
|
self.geo_search(
|
||||||
|
radius=self.vision_range, agent_class=TerroristNetworkModel
|
||||||
|
)
|
||||||
|
)
|
||||||
|
step_neighbours = set(
|
||||||
|
self.ego_search(
|
||||||
|
self.sphere_influence,
|
||||||
|
agent_class=TerroristNetworkModel,
|
||||||
|
center=False,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
neighbours = set(
|
||||||
|
agent.unique_id
|
||||||
|
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
|
||||||
|
)
|
||||||
|
search = (close_ups | step_neighbours) - neighbours
|
||||||
|
for agent in self.get_agents(search):
|
||||||
|
social_distance = 1 / self.shortest_path_length(agent.unique_id)
|
||||||
|
spatial_proximity = 1 - self.get_distance(agent.unique_id)
|
||||||
|
prob_new_interaction = (
|
||||||
|
self.weight_social_distance * social_distance
|
||||||
|
+ self.weight_link_distance * spatial_proximity
|
||||||
|
)
|
||||||
|
if (
|
||||||
|
agent.state_id == "civilian"
|
||||||
|
and self.random.random() < prob_new_interaction
|
||||||
|
):
|
||||||
|
self.add_edge(agent)
|
||||||
|
break
|
||||||
|
|
||||||
|
def get_distance(self, target):
|
||||||
|
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
|
||||||
|
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
|
||||||
|
dx = abs(source_x - target_x)
|
||||||
|
dy = abs(source_y - target_y)
|
||||||
|
return (dx**2 + dy**2) ** (1 / 2)
|
||||||
|
|
||||||
|
def shortest_path_length(self, target):
|
||||||
|
try:
|
||||||
|
return nx.shortest_path_length(self.G, self.unique_id, target)
|
||||||
|
except nx.NetworkXNoPath:
|
||||||
|
return float("inf")
|
||||||
|
|
||||||
|
|
||||||
|
sim = Simulation(
|
||||||
|
model=TerroristEnvironment,
|
||||||
|
iterations=1,
|
||||||
|
name="TerroristNetworkModel_sim",
|
||||||
|
max_steps=150,
|
||||||
|
seed="default2",
|
||||||
|
skip_test=False,
|
||||||
|
dump=False,
|
||||||
|
)
|
||||||
|
|
||||||
|
# TODO: integrate visualization
|
||||||
|
# visualization_params:
|
||||||
|
# # Icons downloaded from https://www.iconfinder.com/
|
||||||
|
# shape_property: agent
|
||||||
|
# shapes:
|
||||||
|
# TrainingAreaModel: target
|
||||||
|
# HavenModel: home
|
||||||
|
# TerroristNetworkModel: person
|
||||||
|
# colors:
|
||||||
|
# - attr_id: civilian
|
||||||
|
# color: '#40de40'
|
||||||
|
# - attr_id: terrorist
|
||||||
|
# color: red
|
||||||
|
# - attr_id: leader
|
||||||
|
# color: '#c16a6a'
|
||||||
|
# background_image: 'map_4800x2860.jpg'
|
||||||
|
# background_opacity: '0.9'
|
||||||
|
# background_filter_color: 'blue'
|
@ -1,14 +0,0 @@
---
name: torvalds_example
max_time: 10
interval: 2
agent_type: CounterModel
default_state:
  skill_level: 'beginner'
network_params:
  path: 'torvalds.edgelist'
states:
  Torvalds:
    skill_level: 'God'
  balkian:
    skill_level: 'developer'
25 examples/torvalds_sim.py Normal file
@ -0,0 +1,25 @@
from soil import Environment, Simulation, CounterModel, report


# Get directory path for current file
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))

class TorvaldsEnv(Environment):

    def init(self):
        self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
        self.populate_network(CounterModel, skill_level='beginner')
        self.agent(node_id="Torvalds").skill_level = 'God'
        self.agent(node_id="balkian").skill_level = 'developer'
        self.add_agent_reporter("times")

    @report
    def god_developers(self):
        return self.count_agents(skill_level='God')


sim = Simulation(name='torvalds_example',
                 max_steps=10,
                 interval=2,
                 model=TorvaldsEnv)
@ -12330,11 +12330,11 @@ Notice how node 0 is the only one with a TV.</p>
|
|||||||
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
|
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
|
||||||
<span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
|
<span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
|
||||||
<span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
|
<span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
|
||||||
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
|
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">'agent_class'</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
|
||||||
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
||||||
<span class="s1">'event_time'</span><span class="p">:</span> <span class="n">EVENT_TIME</span>
|
<span class="s1">'event_time'</span><span class="p">:</span> <span class="n">EVENT_TIME</span>
|
||||||
<span class="p">}}],</span>
|
<span class="p">}}],</span>
|
||||||
<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">'agent_class'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
||||||
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span>
|
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span>
|
||||||
<span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span>
|
<span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span>
|
||||||
<span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span>
|
<span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span>
|
||||||
@ -12468,14 +12468,14 @@ For this demo, we will use a python dictionary:</p>
|
|||||||
<span class="p">},</span>
|
<span class="p">},</span>
|
||||||
<span class="s1">'network_agents'</span><span class="p">:</span> <span class="p">[</span>
|
<span class="s1">'network_agents'</span><span class="p">:</span> <span class="p">[</span>
|
||||||
<span class="p">{</span>
|
<span class="p">{</span>
|
||||||
<span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
<span class="s1">'agent_class'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
||||||
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
|
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
|
||||||
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
||||||
<span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">False</span>
|
<span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">False</span>
|
||||||
<span class="p">}</span>
|
<span class="p">}</span>
|
||||||
<span class="p">},</span>
|
<span class="p">},</span>
|
||||||
<span class="p">{</span>
|
<span class="p">{</span>
|
||||||
<span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
<span class="s1">'agent_class'</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
|
||||||
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
|
<span class="s1">'weight'</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
|
||||||
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
||||||
<span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">True</span>
|
<span class="s1">'has_tv'</span><span class="p">:</span> <span class="kc">True</span>
|
||||||
@ -12483,7 +12483,7 @@ For this demo, we will use a python dictionary:</p>
|
|||||||
<span class="p">}</span>
|
<span class="p">}</span>
|
||||||
<span class="p">],</span>
|
<span class="p">],</span>
|
||||||
<span class="s1">'environment_agents'</span><span class="p">:[</span>
|
<span class="s1">'environment_agents'</span><span class="p">:[</span>
|
||||||
<span class="p">{</span><span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
|
<span class="p">{</span><span class="s1">'agent_class'</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
|
||||||
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
<span class="s1">'state'</span><span class="p">:</span> <span class="p">{</span>
|
||||||
<span class="s1">'event_time'</span><span class="p">:</span> <span class="mi">10</span>
|
<span class="s1">'event_time'</span><span class="p">:</span> <span class="mi">10</span>
|
||||||
<span class="p">}</span>
|
<span class="p">}</span>
|
||||||
|
@ -2,8 +2,12 @@ networkx>=2.5
 numpy
 matplotlib
 pyyaml>=5.1
-pandas>=0.23
+pandas>=1
 SALib>=1.3
 Jinja2
-Mesa>=0.8
+Mesa>=1.2
-tsih>=0.1.9
+pydantic>=1.9
+sqlalchemy>=1.4
+typing-extensions>=4.4
+annotated-types>=0.4
+tqdm>=4.64
@ -1,3 +1,7 @@
+[metadata]
+long_description = file: README.md
+long_description_content_type = text/markdown
+
 [aliases]
 test=pytest
 [tool:pytest]
10 setup.py
@ -44,14 +44,20 @@ setup(
         'Operating System :: MacOS :: MacOS X',
         'Operating System :: Microsoft :: Windows',
         'Operating System :: POSIX',
-        'Programming Language :: Python :: 3'],
+        "Programming Language :: Python :: 3 :: Only",
+        "Programming Language :: Python :: 3.8",
+        "Programming Language :: Python :: 3.9",
+        "Programming Language :: Python :: 3.10",
+        ],
     install_requires=install_reqs,
     extras_require=extras_require,
     tests_require=test_reqs,
     setup_requires=['pytest-runner', ],
+    pytest_plugins = ['pytest_profiling'],
     include_package_data=True,
+    python_requires=">=3.8",
     entry_points={
         'console_scripts':
-            ['soil = soil.__init__:main',
+            ['soil = soil.__main__:main',
             'soil-web = soil.web.__init__:main']
     })
@ -1 +1 @@
-0.20.8
+1.0.0rc2
292
soil/__init__.py
@ -1,8 +1,12 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
import importlib
|
import importlib
|
||||||
|
from importlib.resources import path
|
||||||
import sys
|
import sys
|
||||||
import os
|
import os
|
||||||
import pdb
|
|
||||||
import logging
|
import logging
|
||||||
|
import traceback
|
||||||
|
from contextlib import contextmanager
|
||||||
|
|
||||||
from .version import __version__
|
from .version import __version__
|
||||||
|
|
||||||
@ -11,89 +15,273 @@ try:
|
|||||||
except NameError:
|
except NameError:
|
||||||
basestring = str
|
basestring = str
|
||||||
|
|
||||||
|
from pathlib import Path
|
||||||
|
from .analysis import *
|
||||||
from .agents import *
|
from .agents import *
|
||||||
from . import agents
|
from . import agents
|
||||||
from .simulation import *
|
from .simulation import *
|
||||||
from .environment import Environment
|
from .environment import Environment, EventedEnvironment
|
||||||
|
from .datacollection import SoilCollector
|
||||||
from . import serialization
|
from . import serialization
|
||||||
from . import analysis
|
|
||||||
from .utils import logger
|
from .utils import logger
|
||||||
from .time import *
|
from .time import *
|
||||||
|
from .decorators import *
|
||||||
|
|
||||||
|
|
||||||
|
def main(
|
||||||
|
cfg="simulation.yml",
|
||||||
|
exporters=None,
|
||||||
|
num_processes=1,
|
||||||
|
output="soil_output",
|
||||||
|
*,
|
||||||
|
debug=False,
|
||||||
|
pdb=False,
|
||||||
|
**kwargs,
|
||||||
|
):
|
||||||
|
|
||||||
|
sim = None
|
||||||
|
if isinstance(cfg, Simulation):
|
||||||
|
sim = cfg
|
||||||
|
|
||||||
def main():
|
|
||||||
import argparse
|
import argparse
|
||||||
from . import simulation
|
from . import simulation
|
||||||
|
|
||||||
logger.info('Running SOIL version: {}'.format(__version__))
|
logger.info("Running SOIL version: {}".format(__version__))
|
||||||
|
|
||||||
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
|
parser = argparse.ArgumentParser(description="Run a SOIL simulation")
|
||||||
parser.add_argument('file', type=str,
|
parser.add_argument(
|
||||||
|
"file",
|
||||||
|
type=str,
|
||||||
nargs="?",
|
nargs="?",
|
||||||
default='simulation.yml',
|
default=cfg if sim is None else "",
|
||||||
help='Configuration file for the simulation (e.g., YAML or JSON)')
|
help="Configuration file for the simulation (e.g., YAML or JSON)",
|
||||||
parser.add_argument('--version', action='store_true',
|
)
|
||||||
help='Show version info and exit')
|
parser.add_argument(
|
||||||
parser.add_argument('--module', '-m', type=str,
|
"--version", action="store_true", help="Show version info and exit"
|
||||||
help='file containing the code of any custom agents.')
|
)
|
||||||
parser.add_argument('--dry-run', '--dry', action='store_true',
|
parser.add_argument(
|
||||||
help='Do not store the results of the simulation.')
|
"--module",
|
||||||
parser.add_argument('--pdb', action='store_true',
|
"-m",
|
||||||
help='Use a pdb console in case of exception.')
|
type=str,
|
||||||
parser.add_argument('--graph', '-g', action='store_true',
|
help="file containing the code of any custom agents.",
|
||||||
help='Dump GEXF graph. Defaults to false.')
|
)
|
||||||
parser.add_argument('--csv', action='store_true',
|
parser.add_argument(
|
||||||
help='Dump history in CSV format. Defaults to false.')
|
"--dry-run",
|
||||||
parser.add_argument('--level', type=str,
|
"--dry",
|
||||||
help='Logging level')
|
action="store_true",
|
||||||
parser.add_argument('--output', '-o', type=str, default="soil_output",
|
help="Do not run the simulation",
|
||||||
help='folder to write results to. It defaults to the current directory.')
|
)
|
||||||
parser.add_argument('--synchronous', action='store_true',
|
parser.add_argument(
|
||||||
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
|
"--no-dump",
|
||||||
parser.add_argument('-e', '--exporter', action='append',
|
action="store_true",
|
||||||
help='Export environment and/or simulations using this exporter')
|
help="Do not store the results of the simulation to disk, show in terminal instead.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--pdb", action="store_true", help="Use a pdb console in case of exception."
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--debug",
|
||||||
|
action="store_true",
|
||||||
|
help="Run a customized version of a pdb console to debug a simulation.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--graph",
|
||||||
|
"-g",
|
||||||
|
action="store_true",
|
||||||
|
help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--csv",
|
||||||
|
action="store_true",
|
||||||
|
help="Dump all data collected in CSV format. Defaults to false.",
|
||||||
|
)
|
||||||
|
parser.add_argument("--level", type=str, help="Logging level")
|
||||||
|
parser.add_argument(
|
||||||
|
"--output",
|
||||||
|
"-o",
|
||||||
|
type=str,
|
||||||
|
default=output or "soil_output",
|
||||||
|
help="folder to write results to. It defaults to the current directory.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--num-processes",
|
||||||
|
default=num_processes,
|
||||||
|
help="Number of processes to use for parallel execution. Defaults to 1.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"-e",
|
||||||
|
"--exporter",
|
||||||
|
action="append",
|
||||||
|
default=[],
|
||||||
|
help="Export environment and/or simulations using this exporter",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--max_time",
|
||||||
|
default="-1",
|
||||||
|
help="Set maximum time for the simulation to run. ",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--max_steps",
|
||||||
|
default="-1",
|
||||||
|
help="Set maximum number of steps for the simulation to run.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--iterations",
|
||||||
|
default="",
|
||||||
|
help="Set maximum number of iterations (runs) for the simulation.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--seed",
|
||||||
|
default=None,
|
||||||
|
help="Manually set a seed for the simulation.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--only-convert",
|
||||||
|
"--convert",
|
||||||
|
action="store_true",
|
||||||
|
help="Do not run the simulation, only convert the configuration file(s) and output them.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--set",
|
||||||
|
metavar="KEY=VALUE",
|
||||||
|
action="append",
|
||||||
|
help="Set a number of parameters that will be passed to the simulation."
|
||||||
|
"(do not put spaces before or after the = sign). "
|
||||||
|
"If a value contains spaces, you should define "
|
||||||
|
"it with double quotes: "
|
||||||
|
'foo="this is a sentence". Note that '
|
||||||
|
"values are always treated as strings.",
|
||||||
|
)
|
||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
logging.basicConfig(level=getattr(logging, (args.level or 'INFO').upper()))
|
level = getattr(logging, (args.level or "INFO").upper())
|
||||||
|
logger.setLevel(level)
|
||||||
|
|
||||||
if args.version:
|
if args.version:
|
||||||
return
|
return
|
||||||
|
|
||||||
|
exporters = exporters or [
|
||||||
|
"default",
|
||||||
|
]
|
||||||
|
for exp in args.exporter:
|
||||||
|
if exp not in exporters:
|
||||||
|
exporters.append(exp)
|
||||||
|
if args.csv:
|
||||||
|
exporters.append("csv")
|
||||||
|
if args.graph:
|
||||||
|
exporters.append("gexf")
|
||||||
|
|
||||||
if os.getcwd() not in sys.path:
|
if os.getcwd() not in sys.path:
|
||||||
sys.path.append(os.getcwd())
|
sys.path.append(os.getcwd())
|
||||||
if args.module:
|
if args.module:
|
||||||
importlib.import_module(args.module)
|
importlib.import_module(args.module)
|
||||||
|
if output is None:
|
||||||
|
output = args.output
|
||||||
|
|
||||||
logger.info('Loading config file: {}'.format(args.file))
|
debug = debug or args.debug
|
||||||
|
|
||||||
if args.pdb:
|
if args.pdb or debug:
|
||||||
args.synchronous = True
|
args.synchronous = True
|
||||||
|
os.environ["SOIL_POSTMORTEM"] = "true"
|
||||||
|
|
||||||
|
res = []
|
||||||
try:
|
try:
|
||||||
exporters = list(args.exporter or ['default', ])
|
|
||||||
if args.csv:
|
|
||||||
exporters.append('csv')
|
|
||||||
if args.graph:
|
|
||||||
exporters.append('gexf')
|
|
||||||
exp_params = {}
|
exp_params = {}
|
||||||
if args.dry_run:
|
opts = dict(
|
||||||
exp_params['copy_to'] = sys.stdout
|
|
||||||
|
|
||||||
if not os.path.exists(args.file):
|
|
||||||
logger.error('Please, input a valid file')
|
|
||||||
return
|
|
||||||
simulation.run_from_config(args.file,
|
|
||||||
dry_run=args.dry_run,
|
dry_run=args.dry_run,
|
||||||
|
dump=not args.no_dump,
|
||||||
|
debug=debug,
|
||||||
exporters=exporters,
|
exporters=exporters,
|
||||||
parallel=(not args.synchronous),
|
num_processes=args.num_processes,
|
||||||
outdir=args.output,
|
level=level,
|
||||||
exporter_params=exp_params)
|
outdir=output,
|
||||||
except Exception:
|
exporter_params=exp_params,
|
||||||
if args.pdb:
|
**kwargs)
|
||||||
pdb.post_mortem()
|
if args.seed is not None:
|
||||||
|
opts["seed"] = args.seed
|
||||||
|
if args.iterations:
|
||||||
|
opts["iterations"] =int(args.iterations)
|
||||||
|
|
||||||
|
if sim:
|
||||||
|
logger.info("Loading simulation instance")
|
||||||
|
for (k, v) in opts.items():
|
||||||
|
setattr(sim, k, v)
|
||||||
|
sims = [sim]
|
||||||
else:
|
else:
|
||||||
|
logger.info("Loading config file: {}".format(args.file))
|
||||||
|
if not os.path.exists(args.file):
|
||||||
|
logger.error("Please, input a valid file")
|
||||||
|
return
|
||||||
|
|
||||||
|
assert opts["debug"] == debug
|
||||||
|
sims = list(
|
||||||
|
simulation.iter_from_file(
|
||||||
|
args.file,
|
||||||
|
**opts,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
for sim in sims:
|
||||||
|
assert sim.debug == debug
|
||||||
|
|
||||||
|
if args.set:
|
||||||
|
for s in args.set:
|
||||||
|
k, v = s.split("=", 1)[:2]
|
||||||
|
v = eval(v)
|
||||||
|
tail, *head = k.rsplit(".", 1)[::-1]
|
||||||
|
target = sim.parameters
|
||||||
|
if head:
|
||||||
|
for part in head[0].split("."):
|
||||||
|
try:
|
||||||
|
target = getattr(target, part)
|
||||||
|
except AttributeError:
|
||||||
|
target = target[part]
|
||||||
|
try:
|
||||||
|
setattr(target, tail, v)
|
||||||
|
except AttributeError:
|
||||||
|
target[tail] = v
|
||||||
|
|
||||||
|
if args.only_convert:
|
||||||
|
print(sim.to_yaml())
|
||||||
|
continue
|
||||||
|
max_time = float(args.max_time) if args.max_time != "-1" else None
|
||||||
|
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
|
||||||
|
res.append(sim.run(max_time=max_time, max_steps=max_steps))
|
||||||
|
|
||||||
|
except Exception as ex:
|
||||||
|
if args.pdb:
|
||||||
|
from .debugging import post_mortem
|
||||||
|
|
||||||
|
print(traceback.format_exc())
|
||||||
|
post_mortem()
|
||||||
|
else:
|
||||||
|
raise
|
||||||
|
if debug:
|
||||||
|
from .debugging import set_trace
|
||||||
|
|
||||||
|
os.environ["SOIL_DEBUG"] = "true"
|
||||||
|
set_trace()
|
||||||
|
return res
|
||||||
|
|
||||||
|
|
||||||
|
@contextmanager
|
||||||
|
def easy(cfg, pdb=False, debug=False, **kwargs):
|
||||||
|
try:
|
||||||
|
return main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
|
||||||
|
except Exception as e:
|
||||||
|
if os.environ.get("SOIL_POSTMORTEM"):
|
||||||
|
from .debugging import post_mortem
|
||||||
|
|
||||||
|
print(traceback.format_exc())
|
||||||
|
post_mortem()
|
||||||
raise
|
raise
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == "__main__":
|
||||||
main()
|
main()
|
||||||
|
@ -1,4 +1,9 @@
-from . import main
+from . import main as init_main
 
-if __name__ == '__main__':
-    main()
+
+def main():
+    init_main()
+
+
+if __name__ == "__main__":
+    init_main()
@@ -1,4 +1,3 @@
-import random
 from . import FSM, state, default_state
 
 
@@ -8,6 +7,7 @@ class BassModel(FSM):
     innovation_prob
     imitation_prob
     """
 
     sentimentCorrelation = 0
 
     def step(self):
@@ -16,13 +16,13 @@ class BassModel(FSM):
     @default_state
     @state
     def innovation(self):
-        if random.random() < self.innovation_prob:
+        if self.prob(self.innovation_prob):
             self.sentimentCorrelation = 1
             return self.aware
         else:
-            aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
+            aware_neighbors = self.get_neighbors(state_id=self.aware.id)
             num_neighbors_aware = len(aware_neighbors)
-            if random.random() < (self['imitation_prob']*num_neighbors_aware):
+            if self.prob((self.imitation_prob * num_neighbors_aware)):
                 self.sentimentCorrelation = 1
                 return self.aware
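For illustration only (not part of the diff): the updated BassModel relies on two idioms that replace direct calls to `random.random()` and `get_neighboring_agents()`: `self.prob()` draws from the agent's own seeded RNG, and `get_neighbors()`/`count_neighbors()` query the network. A minimal sketch of a custom agent built the same way as BassModel; the `Rumor` class and its `spread_prob` value are made up:

    from soil.agents import FSM, state, default_state

    class Rumor(FSM):
        spread_prob = 0.1  # illustrative parameter, not taken from the diff

        @default_state
        @state
        def ignorant(self):
            # prob() uses the agent's seeded RNG, so runs stay reproducible
            if self.prob(self.spread_prob * self.count_neighbors(state_id=self.aware.id)):
                return self.aware

        @state
        def aware(self):
            pass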
@ -1,95 +0,0 @@
|
|||||||
import random
|
|
||||||
from . import FSM, state, default_state
|
|
||||||
|
|
||||||
|
|
||||||
class BigMarketModel(FSM):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
Names:
|
|
||||||
enterprises [Array]
|
|
||||||
|
|
||||||
tweet_probability_enterprises [Array]
|
|
||||||
Users:
|
|
||||||
tweet_probability_users
|
|
||||||
|
|
||||||
tweet_relevant_probability
|
|
||||||
|
|
||||||
tweet_probability_about [Array]
|
|
||||||
|
|
||||||
sentiment_about [Array]
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, *args, **kwargs):
|
|
||||||
super().__init__(*args, **kwargs)
|
|
||||||
self.enterprises = self.env.environment_params['enterprises']
|
|
||||||
self.type = ""
|
|
||||||
|
|
||||||
if self.id < len(self.enterprises): # Enterprises
|
|
||||||
self.set_state(self.enterprise.id)
|
|
||||||
self.type = "Enterprise"
|
|
||||||
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
|
|
||||||
else: # normal users
|
|
||||||
self.type = "User"
|
|
||||||
self.set_state(self.user.id)
|
|
||||||
self.tweet_probability = environment.environment_params['tweet_probability_users']
|
|
||||||
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
|
|
||||||
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
|
|
||||||
self.sentiment_about = environment.environment_params['sentiment_about'] # List
|
|
||||||
|
|
||||||
@state
|
|
||||||
def enterprise(self):
|
|
||||||
|
|
||||||
if random.random() < self.tweet_probability: # Tweets
|
|
||||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
|
|
||||||
for x in aware_neighbors:
|
|
||||||
if random.uniform(0,10) < 5:
|
|
||||||
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
|
|
||||||
else:
|
|
||||||
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
|
|
||||||
|
|
||||||
# Establecemos limites
|
|
||||||
if x.sentiment_about[self.id] > 1:
|
|
||||||
x.sentiment_about[self.id] = 1
|
|
||||||
if x.sentiment_about[self.id]< -1:
|
|
||||||
x.sentiment_about[self.id] = -1
|
|
||||||
|
|
||||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
|
|
||||||
|
|
||||||
@state
|
|
||||||
def user(self):
|
|
||||||
if random.random() < self.tweet_probability: # Tweets
|
|
||||||
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
|
|
||||||
# Tweet probability per enterprise
|
|
||||||
for i in range(len(self.enterprises)):
|
|
||||||
random_num = random.random()
|
|
||||||
if random_num < self.tweet_probability_about[i]:
|
|
||||||
# The condition is fulfilled, sentiments are evaluated towards that enterprise
|
|
||||||
if self.sentiment_about[i] < 0:
|
|
||||||
# NEGATIVO
|
|
||||||
self.userTweets("negative",i)
|
|
||||||
elif self.sentiment_about[i] == 0:
|
|
||||||
# NEUTRO
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
# POSITIVO
|
|
||||||
self.userTweets("positive",i)
|
|
||||||
for i in range(len(self.enterprises)): # So that it never is set to 0 if there are not changes (logs)
|
|
||||||
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
|
|
||||||
|
|
||||||
def userTweets(self, sentiment,enterprise):
|
|
||||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
|
|
||||||
for x in aware_neighbors:
|
|
||||||
if sentiment == "positive":
|
|
||||||
x.sentiment_about[enterprise] +=0.003
|
|
||||||
elif sentiment == "negative":
|
|
||||||
x.sentiment_about[enterprise] -=0.003
|
|
||||||
else:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# Establecemos limites
|
|
||||||
if x.sentiment_about[enterprise] > 1:
|
|
||||||
x.sentiment_about[enterprise] = 1
|
|
||||||
if x.sentiment_about[enterprise] < -1:
|
|
||||||
x.sentiment_about[enterprise] = -1
|
|
||||||
|
|
||||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]
|
|
@@ -1,19 +1,29 @@
-from . import NetworkAgent
+from . import BaseAgent, NetworkAgent
 
 
+class Ticker(BaseAgent):
+    times = 0
+
+    def step(self):
+        self.times += 1
+
+
 class CounterModel(NetworkAgent):
     """
     Dummy behaviour. It counts the number of nodes in the network and neighbors
     in each step and adds it to its state.
     """
 
+    times = 0
+    neighbors = 0
+    total = 0
+
     def step(self):
         # Outside effects
-        total = len(list(self.get_agents()))
+        total = len(list(self.model.schedule._agents))
-        neighbors = len(list(self.get_neighboring_agents()))
+        neighbors = len(list(self.get_neighbors()))
-        self['times'] = self.get('times', 0) + 1
+        self["times"] = self.get("times", 0) + 1
-        self['neighbors'] = neighbors
+        self["neighbors"] = neighbors
-        self['total'] = total
+        self["total"] = total
 
 
 class AggregatedCounter(NetworkAgent):
@@ -22,17 +32,15 @@ class AggregatedCounter(NetworkAgent):
     in each step and adds it to its state.
     """
 
-    defaults = {
-        'times': 0,
-        'neighbors': 0,
-        'total': 0
-    }
+    times = 0
+    neighbors = 0
+    total = 0
 
     def step(self):
         # Outside effects
-        self['times'] += 1
+        self["times"] += 1
-        neighbors = len(list(self.get_neighboring_agents()))
+        neighbors = len(list(self.get_neighbors()))
-        self['neighbors'] += neighbors
+        self["neighbors"] += neighbors
-        total = len(list(self.get_agents()))
+        total = len(list(self.model.schedule.agents))
-        self['total'] += total
+        self["total"] += total
-        self.debug('Running for step: {}. Total: {}'.format(self.now, total))
+        self.debug("Running for step: {}. Total: {}".format(self.now, total))
@@ -1,21 +1,21 @@
 from scipy.spatial import cKDTree as KDTree
 import networkx as nx
-from . import NetworkAgent, as_node
+from . import NetworkAgent
 
 
 class Geo(NetworkAgent):
-    '''In this type of network, nodes have a "pos" attribute.'''
+    """In this type of network, nodes have a "pos" attribute."""
 
-    def geo_search(self, radius, node=None, center=False, **kwargs):
+    def geo_search(self, radius, center=False, **kwargs):
-        '''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
+        """Get a list of nodes whose coordinates are closer than *radius* to *node*."""
-        node = as_node(node if node is not None else self)
+        node = self.node_id
 
         G = self.subgraph(**kwargs)
 
-        pos = nx.get_node_attributes(G, 'pos')
+        pos = nx.get_node_attributes(G, "pos")
         if not pos:
             return []
         nodes, coords = list(zip(*pos.items()))
         kdtree = KDTree(coords)  # Cannot provide generator.
         indices = kdtree.query_ball_point(pos[node], radius)
         return [nodes[i] for i in indices if center or (nodes[i] != node)]
@@ -1,8 +1,7 @@
-import random
-from . import BaseAgent
+from . import Agent, state, default_state
 
 
-class IndependentCascadeModel(BaseAgent):
+class IndependentCascadeModel(Agent):
     """
     Settings:
         innovation_prob
@@ -10,40 +9,22 @@ class IndependentCascadeModel(BaseAgent):
         imitation_prob
     """
 
-    def __init__(self, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-        self.innovation_prob = self.env.environment_params['innovation_prob']
-        self.imitation_prob = self.env.environment_params['imitation_prob']
-        self.state['time_awareness'] = 0
-        self.state['sentimentCorrelation'] = 0
-
-    def step(self):
-        self.behaviour()
-
-    def behaviour(self):
-        aware_neighbors_1_time_step = []
+    time_awareness = 0
+    sentimentCorrelation = 0
+
     # Outside effects
-        if random.random() < self.innovation_prob:
-            if self.state['id'] == 0:
-                self.state['id'] = 1
-                self.state['sentimentCorrelation'] = 1
-                self.state['time_awareness'] = self.env.now  # To know when they have been infected
-            else:
-                pass
-            return
+    @default_state
+    @state
+    def outside(self):
+        if self.prob(self.model.innovation_prob):
+            self.sentimentCorrelation = 1
+            self.time_awareness = self.model.now  # To know when they have been infected
+        return self.imitate
 
-        # Imitation effects
-        if self.state['id'] == 0:
-            aware_neighbors = self.get_neighboring_agents(state_id=1)
-            for x in aware_neighbors:
-                if x.state['time_awareness'] == (self.env.now-1):
-                    aware_neighbors_1_time_step.append(x)
-            num_neighbors_aware = len(aware_neighbors_1_time_step)
-            if random.random() < (self.imitation_prob*num_neighbors_aware):
-                self.state['id'] = 1
-                self.state['sentimentCorrelation'] = 1
-            else:
-                pass
-
-        return
+    @state
+    def imitate(self):
+        aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
+
+        if self.prob(self.model.imitation_prob * len(aware_neighbors)):
+            self.sentimentCorrelation = 1
+        return self.outside
@ -1,242 +0,0 @@
|
|||||||
import random
|
|
||||||
import numpy as np
|
|
||||||
from . import BaseAgent
|
|
||||||
|
|
||||||
|
|
||||||
class SpreadModelM2(BaseAgent):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
prob_neutral_making_denier
|
|
||||||
|
|
||||||
prob_infect
|
|
||||||
|
|
||||||
prob_cured_healing_infected
|
|
||||||
|
|
||||||
prob_cured_vaccinate_neutral
|
|
||||||
|
|
||||||
prob_vaccinated_healing_infected
|
|
||||||
|
|
||||||
prob_vaccinated_vaccinate_neutral
|
|
||||||
|
|
||||||
prob_generate_anti_rumor
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
|
||||||
|
|
||||||
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
def step(self):
|
|
||||||
|
|
||||||
if self.state['id'] == 0: # Neutral
|
|
||||||
self.neutral_behaviour()
|
|
||||||
elif self.state['id'] == 1: # Infected
|
|
||||||
self.infected_behaviour()
|
|
||||||
elif self.state['id'] == 2: # Cured
|
|
||||||
self.cured_behaviour()
|
|
||||||
elif self.state['id'] == 3: # Vaccinated
|
|
||||||
self.vaccinated_behaviour()
|
|
||||||
|
|
||||||
def neutral_behaviour(self):
|
|
||||||
|
|
||||||
# Infected
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
if len(infected_neighbors) > 0:
|
|
||||||
if random.random() < self.prob_neutral_making_denier:
|
|
||||||
self.state['id'] = 3 # Vaccinated making denier
|
|
||||||
|
|
||||||
def infected_behaviour(self):
|
|
||||||
|
|
||||||
# Neutral
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_infect:
|
|
||||||
neighbor.state['id'] = 1 # Infected
|
|
||||||
|
|
||||||
def cured_behaviour(self):
|
|
||||||
|
|
||||||
# Vaccinate
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
||||||
|
|
||||||
# Cure
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors:
|
|
||||||
if random.random() < self.prob_cured_healing_infected:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
def vaccinated_behaviour(self):
|
|
||||||
|
|
||||||
# Cure
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors:
|
|
||||||
if random.random() < self.prob_cured_healing_infected:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
# Vaccinate
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
||||||
|
|
||||||
# Generate anti-rumor
|
|
||||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors_2:
|
|
||||||
if random.random() < self.prob_generate_anti_rumor:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
|
|
||||||
class ControlModelM2(BaseAgent):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
prob_neutral_making_denier
|
|
||||||
|
|
||||||
prob_infect
|
|
||||||
|
|
||||||
prob_cured_healing_infected
|
|
||||||
|
|
||||||
prob_cured_vaccinate_neutral
|
|
||||||
|
|
||||||
prob_vaccinated_healing_infected
|
|
||||||
|
|
||||||
prob_vaccinated_vaccinate_neutral
|
|
||||||
|
|
||||||
prob_generate_anti_rumor
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
|
||||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
|
||||||
|
|
||||||
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
|
||||||
environment.environment_params['standard_variance'])
|
|
||||||
|
|
||||||
def step(self):
|
|
||||||
|
|
||||||
if self.state['id'] == 0: # Neutral
|
|
||||||
self.neutral_behaviour()
|
|
||||||
elif self.state['id'] == 1: # Infected
|
|
||||||
self.infected_behaviour()
|
|
||||||
elif self.state['id'] == 2: # Cured
|
|
||||||
self.cured_behaviour()
|
|
||||||
elif self.state['id'] == 3: # Vaccinated
|
|
||||||
self.vaccinated_behaviour()
|
|
||||||
elif self.state['id'] == 4: # Beacon-off
|
|
||||||
self.beacon_off_behaviour()
|
|
||||||
elif self.state['id'] == 5: # Beacon-on
|
|
||||||
self.beacon_on_behaviour()
|
|
||||||
|
|
||||||
def neutral_behaviour(self):
|
|
||||||
self.state['visible'] = False
|
|
||||||
|
|
||||||
# Infected
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
if len(infected_neighbors) > 0:
|
|
||||||
if random.random() < self.prob_neutral_making_denier:
|
|
||||||
self.state['id'] = 3 # Vaccinated making denier
|
|
||||||
|
|
||||||
def infected_behaviour(self):
|
|
||||||
|
|
||||||
# Neutral
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_infect:
|
|
||||||
neighbor.state['id'] = 1 # Infected
|
|
||||||
self.state['visible'] = False
|
|
||||||
|
|
||||||
def cured_behaviour(self):
|
|
||||||
|
|
||||||
self.state['visible'] = True
|
|
||||||
# Vaccinate
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
||||||
|
|
||||||
# Cure
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors:
|
|
||||||
if random.random() < self.prob_cured_healing_infected:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
def vaccinated_behaviour(self):
|
|
||||||
self.state['visible'] = True
|
|
||||||
|
|
||||||
# Cure
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors:
|
|
||||||
if random.random() < self.prob_cured_healing_infected:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
# Vaccinate
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
||||||
|
|
||||||
# Generate anti-rumor
|
|
||||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors_2:
|
|
||||||
if random.random() < self.prob_generate_anti_rumor:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
def beacon_off_behaviour(self):
|
|
||||||
self.state['visible'] = False
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
if len(infected_neighbors) > 0:
|
|
||||||
self.state['id'] == 5 # Beacon on
|
|
||||||
|
|
||||||
def beacon_on_behaviour(self):
|
|
||||||
self.state['visible'] = False
|
|
||||||
# Cure (M2 feature added)
|
|
||||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors:
|
|
||||||
if random.random() < self.prob_generate_anti_rumor:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors_infected:
|
|
||||||
if random.random() < self.prob_generate_anti_rumor:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
||||||
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
|
|
||||||
for neighbor in infected_neighbors_infected:
|
|
||||||
if random.random() < self.prob_generate_anti_rumor:
|
|
||||||
neighbor.state['id'] = 2 # Cured
|
|
||||||
|
|
||||||
# Vaccinate
|
|
||||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
|
||||||
for neighbor in neutral_neighbors:
|
|
||||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
|
||||||
neighbor.state['id'] = 3 # Vaccinated
|
|
@@ -1,9 +1,9 @@
-import random
 import numpy as np
-from . import FSM, state
+from hashlib import sha512
+from . import Agent, state, default_state
 
 
-class SISaModel(FSM):
+class SISaModel(Agent):
     """
     Settings:
         neutral_discontent_spon_prob
@@ -29,65 +29,82 @@ class SISaModel(FSM):
         standard_variance
     """
 
-    def __init__(self, environment, unique_id=0, state=()):
-        super().__init__(model=environment, unique_id=unique_id, state=state)
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
 
-        self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
-                                                              self.env['standard_variance'])
-        self.neutral_discontent_infected_prob = np.random.normal(self.env['neutral_discontent_infected_prob'],
-                                                                  self.env['standard_variance'])
-        self.neutral_content_spon_prob = np.random.normal(self.env['neutral_content_spon_prob'],
-                                                           self.env['standard_variance'])
-        self.neutral_content_infected_prob = np.random.normal(self.env['neutral_content_infected_prob'],
-                                                               self.env['standard_variance'])
-
-        self.discontent_neutral = np.random.normal(self.env['discontent_neutral'],
-                                                    self.env['standard_variance'])
-        self.discontent_content = np.random.normal(self.env['discontent_content'],
-                                                    self.env['variance_d_c'])
-
-        self.content_discontent = np.random.normal(self.env['content_discontent'],
-                                                    self.env['variance_c_d'])
-        self.content_neutral = np.random.normal(self.env['content_neutral'],
-                                                 self.env['standard_variance'])
+        seed = self.model._seed
+        if isinstance(seed, (str, bytes, bytearray)):
+            if isinstance(seed, str):
+                seed = seed.encode()
+            seed = int.from_bytes(seed + sha512(seed).digest(), 'big')
+
+        random = np.random.default_rng(seed=seed)
+
+        self.neutral_discontent_spon_prob = random.normal(
+            self.model.neutral_discontent_spon_prob, self.model.standard_variance
+        )
+        self.neutral_discontent_infected_prob = random.normal(
+            self.model.neutral_discontent_infected_prob, self.model.standard_variance
+        )
+        self.neutral_content_spon_prob = random.normal(
+            self.model.neutral_content_spon_prob, self.model.standard_variance
+        )
+        self.neutral_content_infected_prob = random.normal(
+            self.model.neutral_content_infected_prob, self.model.standard_variance
+        )
+
+        self.discontent_neutral = random.normal(
+            self.model.discontent_neutral, self.model.standard_variance
+        )
+        self.discontent_content = random.normal(
+            self.model.discontent_content, self.model.variance_d_c
+        )
+
+        self.content_discontent = random.normal(
+            self.model.content_discontent, self.model.variance_c_d
+        )
+        self.content_neutral = random.normal(
+            self.model.discontent_neutral, self.model.standard_variance
+        )
 
+    @default_state
     @state
     def neutral(self):
         # Spontaneous effects
-        if random.random() < self.neutral_discontent_spon_prob:
+        if self.prob(self.neutral_discontent_spon_prob):
             return self.discontent
-        if random.random() < self.neutral_content_spon_prob:
+        if self.prob(self.neutral_content_spon_prob):
             return self.content
 
         # Infected
-        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent)
+        discontent_neighbors = self.count_neighbors(state_id=self.discontent)
-        if random.random() < discontent_neighbors * self.neutral_discontent_infected_prob:
+        if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
             return self.discontent
-        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
+        content_neighbors = self.count_neighbors(state_id=self.content.id)
-        if random.random() < content_neighbors * self.neutral_content_infected_prob:
+        if self.prob(content_neighbors * self.neutral_content_infected_prob):
             return self.content
         return self.neutral
 
     @state
     def discontent(self):
         # Healing
-        if random.random() < self.discontent_neutral:
+        if self.prob(self.discontent_neutral):
             return self.neutral
 
         # Superinfected
-        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
+        content_neighbors = self.count_neighbors(state_id=self.content.id)
-        if random.random() < content_neighbors * self.discontent_content:
+        if self.prob(content_neighbors * self.discontent_content):
             return self.content
         return self.discontent
 
     @state
     def content(self):
         # Healing
-        if random.random() < self.content_neutral:
+        if self.prob(self.content_neutral):
             return self.neutral
 
         # Superinfected
-        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
+        discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
-        if random.random() < discontent_neighbors * self.content_discontent:
+        if self.prob(discontent_neighbors * self.content_discontent):
             self.discontent
         return self.content
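For illustration only (not part of the diff): the seed handling above turns the model seed, which may be a string, into an integer acceptable to `np.random.default_rng`, so every agent derives its perturbed probabilities reproducibly. The same logic as a standalone helper; the function name and values are assumptions:

    from hashlib import sha512
    import numpy as np

    def rng_from_seed(seed):
        # Mirror of the seed handling in the new SISaModel.__init__
        if isinstance(seed, (str, bytes, bytearray)):
            if isinstance(seed, str):
                seed = seed.encode()
            seed = int.from_bytes(seed + sha512(seed).digest(), "big")
        return np.random.default_rng(seed=seed)

    rng = rng_from_seed("my-simulation-seed")
    prob = rng.normal(0.1, 0.05)  # e.g. a perturbed transition probability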
@ -1,102 +0,0 @@
|
|||||||
import random
|
|
||||||
from . import BaseAgent
|
|
||||||
|
|
||||||
|
|
||||||
class SentimentCorrelationModel(BaseAgent):
|
|
||||||
"""
|
|
||||||
Settings:
|
|
||||||
outside_effects_prob
|
|
||||||
|
|
||||||
anger_prob
|
|
||||||
|
|
||||||
joy_prob
|
|
||||||
|
|
||||||
sadness_prob
|
|
||||||
|
|
||||||
disgust_prob
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, environment, unique_id=0, state=()):
|
|
||||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
|
||||||
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
|
|
||||||
self.anger_prob = environment.environment_params['anger_prob']
|
|
||||||
self.joy_prob = environment.environment_params['joy_prob']
|
|
||||||
self.sadness_prob = environment.environment_params['sadness_prob']
|
|
||||||
self.disgust_prob = environment.environment_params['disgust_prob']
|
|
||||||
self.state['time_awareness'] = []
|
|
||||||
for i in range(4): # In this model we have 4 sentiments
|
|
||||||
self.state['time_awareness'].append(0) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
|
|
||||||
self.state['sentimentCorrelation'] = 0
|
|
||||||
|
|
||||||
def step(self):
|
|
||||||
self.behaviour()
|
|
||||||
|
|
||||||
def behaviour(self):
|
|
||||||
|
|
||||||
angry_neighbors_1_time_step = []
|
|
||||||
joyful_neighbors_1_time_step = []
|
|
||||||
sad_neighbors_1_time_step = []
|
|
||||||
disgusted_neighbors_1_time_step = []
|
|
||||||
|
|
||||||
angry_neighbors = self.get_neighboring_agents(state_id=1)
|
|
||||||
for x in angry_neighbors:
|
|
||||||
if x.state['time_awareness'][0] > (self.env.now-500):
|
|
||||||
angry_neighbors_1_time_step.append(x)
|
|
||||||
num_neighbors_angry = len(angry_neighbors_1_time_step)
|
|
||||||
|
|
||||||
joyful_neighbors = self.get_neighboring_agents(state_id=2)
|
|
||||||
for x in joyful_neighbors:
|
|
||||||
if x.state['time_awareness'][1] > (self.env.now-500):
|
|
||||||
joyful_neighbors_1_time_step.append(x)
|
|
||||||
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
|
|
||||||
|
|
||||||
sad_neighbors = self.get_neighboring_agents(state_id=3)
|
|
||||||
for x in sad_neighbors:
|
|
||||||
if x.state['time_awareness'][2] > (self.env.now-500):
|
|
||||||
sad_neighbors_1_time_step.append(x)
|
|
||||||
num_neighbors_sad = len(sad_neighbors_1_time_step)
|
|
||||||
|
|
||||||
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
|
|
||||||
for x in disgusted_neighbors:
|
|
||||||
if x.state['time_awareness'][3] > (self.env.now-500):
|
|
||||||
disgusted_neighbors_1_time_step.append(x)
|
|
||||||
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
|
|
||||||
|
|
||||||
anger_prob = self.anger_prob+(len(angry_neighbors_1_time_step)*self.anger_prob)
|
|
||||||
joy_prob = self.joy_prob+(len(joyful_neighbors_1_time_step)*self.joy_prob)
|
|
||||||
sadness_prob = self.sadness_prob+(len(sad_neighbors_1_time_step)*self.sadness_prob)
|
|
||||||
disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
|
|
||||||
outside_effects_prob = self.outside_effects_prob
|
|
||||||
|
|
||||||
num = random.random()
|
|
||||||
|
|
||||||
if num<outside_effects_prob:
|
|
||||||
self.state['id'] = random.randint(1, 4)
|
|
||||||
|
|
||||||
self.state['sentimentCorrelation'] = self.state['id'] # It is stored when it has been infected for the dynamic network
|
|
||||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
|
||||||
self.state['sentiment'] = self.state['id']
|
|
||||||
|
|
||||||
|
|
||||||
if(num<anger_prob):
|
|
||||||
|
|
||||||
self.state['id'] = 1
|
|
||||||
self.state['sentimentCorrelation'] = 1
|
|
||||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
|
||||||
elif (num<joy_prob+anger_prob and num>anger_prob):
|
|
||||||
|
|
||||||
self.state['id'] = 2
|
|
||||||
self.state['sentimentCorrelation'] = 2
|
|
||||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
|
||||||
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
|
|
||||||
|
|
||||||
self.state['id'] = 3
|
|
||||||
self.state['sentimentCorrelation'] = 3
|
|
||||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
|
||||||
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
|
|
||||||
|
|
||||||
self.state['id'] = 4
|
|
||||||
self.state['sentimentCorrelation'] = 4
|
|
||||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
|
||||||
|
|
||||||
self.state['sentiment'] = self.state['id']
|
|
soil/agents/evented.py (new file, 77 lines)
@@ -0,0 +1,77 @@
+from . import BaseAgent
+from ..events import Message, Tell, Ask, TimedOut
+from ..time import BaseCond
+from functools import partial
+from collections import deque
+
+
+class ReceivedOrTimeout(BaseCond):
+    def __init__(
+        self, agent, expiration=None, timeout=None, check=True, ignore=False, **kwargs
+    ):
+        if expiration is None:
+            if timeout is not None:
+                expiration = agent.now + timeout
+        self.expiration = expiration
+        self.ignore = ignore
+        self.check = check
+        super().__init__(**kwargs)
+
+    def expired(self, time):
+        return self.expiration and self.expiration < time
+
+    def ready(self, agent, time):
+        return len(agent._inbox) or self.expired(time)
+
+    def return_value(self, agent):
+        if not self.ignore and self.expired(agent.now):
+            raise TimedOut("No messages received")
+        if self.check:
+            agent.check_messages()
+        return None
+
+    def schedule_next(self, time, delta, first=False):
+        if self._delta is not None:
+            delta = self._delta
+        return (time + delta, self)
+
+    def __repr__(self):
+        return f"ReceivedOrTimeout(expires={self.expiration})"
+
+
+class EventedAgent(BaseAgent):
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self._inbox = deque()
+        self._processed = 0
+
+    def on_receive(self, *args, **kwargs):
+        pass
+
+    def received(self, *args, **kwargs):
+        return ReceivedOrTimeout(self, *args, **kwargs)
+
+    def tell(self, msg, sender=None):
+        self._inbox.append(Tell(timestamp=self.now, payload=msg, sender=sender))
+
+    def ask(self, msg, timeout=None, **kwargs):
+        ask = Ask(timestamp=self.now, payload=msg, sender=self)
+        self._inbox.append(ask)
+        expiration = float("inf") if timeout is None else self.now + timeout
+        return ask.replied(expiration=expiration, **kwargs)
+
+    def check_messages(self):
+        changed = False
+        while self._inbox:
+            msg = self._inbox.popleft()
+            self._processed += 1
+            if msg.expired(self.now):
+                continue
+            changed = True
+            reply = self.on_receive(msg.payload, sender=msg.sender)
+            if isinstance(msg, Ask):
+                msg.reply = reply
+        return changed
+
+
+Evented = EventedAgent
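For illustration only (not part of the diff): a sketch of how the inbox API added here might be used. The `Gossiper` class is made up, and how such agents are registered with a model is left out:

    from soil.agents import Evented

    class Gossiper(Evented):
        def on_receive(self, payload, sender=None):
            # Called for every queued message when check_messages() runs;
            # the return value becomes msg.reply for Ask messages.
            self.debug(f"got {payload!r} from {sender}")
            return "ack"

        def step(self):
            for other in self.get_agents(agent_class=Gossiper):
                if other is not self:
                    other.tell("hello", sender=self)  # fire-and-forget message
            self.check_messages()  # drain the inbox, dispatching to on_receive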
soil/agents/fsm.py (new file, 148 lines)
@@ -0,0 +1,148 @@
+from . import MetaAgent, BaseAgent
+from ..time import Delta
+
+from functools import partial, wraps
+import inspect
+
+
+def state(name=None, default=False):
+    def decorator(func, name=None):
+        """
+        A state function should return either a state id, or a tuple (state_id, when)
+        The default value for state_id is the current state id.
+        The default value for when is the interval defined in the environment.
+        """
+        if inspect.isgeneratorfunction(func):
+            orig_func = func
+
+            @wraps(func)
+            def func(self):
+                while True:
+                    if not self._coroutine:
+                        self._coroutine = orig_func(self)
+
+                    try:
+                        if self._last_except:
+                            n = self._coroutine.throw(self._last_except)
+                        else:
+                            n = self._coroutine.send(self._last_return)
+                        if n:
+                            return None, n
+                        return n
+                    except StopIteration as ex:
+                        self._coroutine = None
+                        next_state = ex.value
+                        if next_state is not None:
+                            self._set_state(next_state)
+                        return next_state
+                    finally:
+                        self._last_return = None
+                        self._last_except = None
+
+        func.id = name or func.__name__
+        func.is_default = default
+        return func
+
+    if callable(name):
+        return decorator(name)
+    else:
+        return partial(decorator, name=name)
+
+
+def default_state(func):
+    func.is_default = True
+    return func
+
+
+class MetaFSM(MetaAgent):
+    def __new__(mcls, name, bases, namespace):
+        states = {}
+        # Re-use states from inherited classes
+        default_state = None
+        for i in bases:
+            if isinstance(i, MetaFSM):
+                for state_id, state in i._states.items():
+                    if state.is_default:
+                        default_state = state
+                    states[state_id] = state
+
+        # Add new states
+        for attr, func in namespace.items():
+            if hasattr(func, "id"):
+                if func.is_default:
+                    default_state = func
+                states[func.id] = func
+
+        namespace.update(
+            {
+                "_default_state": default_state,
+                "_states": states,
+            }
+        )
+
+        return super(MetaFSM, mcls).__new__(
+            mcls=mcls, name=name, bases=bases, namespace=namespace
+        )
+
+
+class FSM(BaseAgent, metaclass=MetaFSM):
+    def __init__(self, init=True, **kwargs):
+        super().__init__(**kwargs, init=False)
+        if not hasattr(self, "state_id"):
+            if not self._default_state:
+                raise ValueError(
+                    "No default state specified for {}".format(self.unique_id)
+                )
+            self.state_id = self._default_state.id
+
+        self._coroutine = None
+        self.default_interval = Delta(self.model.interval)
+        self._set_state(self.state_id)
+        if init:
+            self.init()
+
+    @classmethod
+    def states(cls):
+        return list(cls._states.keys())
+
+    def step(self):
+        self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
+
+        self._check_alive()
+        next_state = self._states[self.state_id](self)
+
+        when = None
+        try:
+            next_state, *when = next_state
+            if not when:
+                when = None
+            elif len(when) == 1:
+                when = when[0]
+            else:
+                raise ValueError(
+                    "Too many values returned. Only state (and time) allowed"
+                )
+        except TypeError:
+            pass
+
+        if next_state is not None:
+            self._set_state(next_state)
+
+        return when or self.default_interval
+
+    def _set_state(self, state, when=None):
+        if hasattr(state, "id"):
+            state = state.id
+        if state not in self._states:
+            raise ValueError("{} is not a valid state".format(state))
+        self.state_id = state
+        if when is not None:
+            self.model.schedule.add(self, when=when)
+        return state
+
+    def die(self, *args, **kwargs):
+        return self.dead, super().die(*args, **kwargs)
+
+    @state
+    def dead(self):
+        return self.die()
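For illustration only (not part of the diff): the decorator above also accepts generator functions as states. Yielded values are handed back to the scheduler as the next activation delay, and a `return` value (if any) selects the next state, matching the `(state, time)` convention of plain state functions. A minimal sketch; the `TrafficLight` class and the delays are made up:

    from soil.agents import FSM, state, default_state

    class TrafficLight(FSM):
        @default_state
        @state
        def green(self):
            yield 30          # resume this same state 30 time units later
            return self.red   # then switch to the red state

        @state
        def red(self):
            yield 10
            return self.green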
soil/agents/network_agents.py (new file, 100 lines)
@@ -0,0 +1,100 @@
+from . import BaseAgent
+
+
+class NetworkAgent(BaseAgent):
+    def __init__(self, *args, topology=None, init=True, node_id=None, **kwargs):
+        super().__init__(*args, init=False, **kwargs)
+
+        self.G = topology or self.model.G
+        assert self.G
+        if node_id is None:
+            nodes = self.random.choices(list(self.G.nodes), k=len(self.G))
+            for n_id in nodes:
+                if "agent" not in self.G.nodes[n_id] or self.G.nodes[n_id]["agent"] is None:
+                    node_id = n_id
+                    break
+            else:
+                node_id = len(self.G)
+                self.info(f"All nodes ({len(self.G)}) have an agent assigned, adding a new node to the graph for agent {self.unique_id}")
+                self.G.add_node(node_id)
+        assert node_id is not None
+        self.G.nodes[node_id]["agent"] = self
+        self.node_id = node_id
+        if init:
+            self.init()
+
+    def count_neighbors(self, state_id=None, **kwargs):
+        return len(self.get_neighbors(state_id=state_id, **kwargs))
+
+    def iter_neighbors(self, **kwargs):
+        return self.iter_agents(limit_neighbors=True, **kwargs)
+
+    def get_neighbors(self, **kwargs):
+        return list(self.iter_neighbors(**kwargs))
+
+    @property
+    def node(self):
+        return self.G.nodes[self.node_id]
+
+    def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
+        unique_ids = None
+        if unique_ids is not None:
+            try:
+                unique_ids = set(unique_id)
+            except TypeError:
+                unique_ids = set([unique_id])
+
+        if limit_neighbors:
+            neighbor_ids = set()
+            for node_id in self.G.neighbors(self.node_id):
+                agent = self.G.nodes[node_id].get("agent")
+                if agent is not None:
+                    neighbor_ids.add(agent.unique_id)
+            if unique_ids:
+                unique_ids = unique_ids & neighbor_ids
+            else:
+                unique_ids = neighbor_ids
+            if not unique_ids:
+                return
+            unique_ids = list(unique_ids)
+        yield from super().iter_agents(unique_id=unique_ids, **kwargs)
+
+    def subgraph(self, center=True, **kwargs):
+        include = [self] if center else []
+        G = self.G.subgraph(
+            n.node_id for n in list(self.get_agents(**kwargs) + include)
+        )
+        return G
+
+    def remove_node(self):
+        self.debug(f"Removing node for {self.unique_id}: {self.node_id}")
+        self.G.remove_node(self.node_id)
+        self.node_id = None
+
+    def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
+        if self.node_id not in self.G.nodes(data=False):
+            raise ValueError(
+                "{} not in list of existing agents in the network".format(
+                    self.unique_id
+                )
+            )
+        if other.node_id not in self.G.nodes(data=False):
+            raise ValueError(
+                "{} not in list of existing agents in the network".format(other)
+            )
+
+        self.G.add_edge(
+            self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs
+        )
+
+    def die(self, remove=True):
+        if not self.alive:
+            return None
+        if remove:
+            self.remove_node()
+        return super().die()
+
+
+NetAgent = NetworkAgent
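For illustration only (not part of the diff): a sketch of the neighbor helpers provided by the new NetworkAgent. The `Opinion` class and its attribute are made up:

    from soil.agents import NetworkAgent

    class Opinion(NetworkAgent):
        opinion = 0

        def step(self):
            neighbors = self.get_neighbors()  # agents placed on adjacent nodes
            agreeing = sum(1 for n in neighbors if n.opinion == self.opinion)
            if neighbors and agreeing < len(neighbors) / 2:
                self.opinion = 1 - self.opinion  # join the local majority
            self.node["opinion"] = self.opinion  # annotate the underlying graph node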
soil/analysis.py
@@ -1,206 +1,49 @@
+import os
+import sqlalchemy
 import pandas as pd
+from collections import namedtuple
 
-import glob
-import yaml
-from os.path import join
-
-from . import serialization
-from tsih import History
-
-
-def read_data(*args, group=False, **kwargs):
-    iterable = _read_data(*args, **kwargs)
-    if group:
-        return group_trials(iterable)
-    else:
-        return list(iterable)
-
-
-def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
-    if not process_args:
-        process_args = {}
-    for folder in glob.glob(pattern):
-        config_file = glob.glob(join(folder, '*.yml'))[0]
-        config = yaml.load(open(config_file), Loader=yaml.SafeLoader)
-        df = None
-        if from_csv:
-            for trial_data in sorted(glob.glob(join(folder,
-                                                    '*.environment.csv'))):
-                df = read_csv(trial_data, **kwargs)
-                yield config_file, df, config
-        else:
-            for trial_data in sorted(glob.glob(join(folder, '*.sqlite'))):
-                df = read_sql(trial_data, **kwargs)
-                yield config_file, df, config
+
+def plot(env, agent_df=None, model_df=None, steps=False, ignore=["agent_count", ]):
+    """Plot the model dataframe and agent dataframe together."""
+    if agent_df is None:
+        agent_df = env.agent_df()
+    if model_df is None:
+        model_df = env.model_df()
+    ignore = list(ignore)
+    if not steps:
+        ignore.append("step")
+    else:
+        ignore.append("time")
+
+    ax = model_df.drop(ignore, axis='columns').plot();
+    if not agent_df.empty:
+        agent_df.unstack().apply(lambda x: x.value_counts(),
+                                 axis=1).fillna(0).plot(ax=ax, secondary_y=True);
+
+
+Results = namedtuple("Results", ["config", "parameters", "env", "agents"])
+#TODO implement reading from CSV and SQLITE
+def read_sql(fpath=None, name=None, include_agents=False):
+    if not (fpath is None) ^ (name is None):
+        raise ValueError("Specify either a path or a simulation name")
+    if name:
+        fpath = os.path.join("soil_output", name, f"{name}.sqlite")
+    fpath = os.path.abspath(fpath)
+    # TODO: improve url parsing. This is a hacky way to check we weren't given a URL
+    if "://" not in fpath:
+        fpath = f"sqlite:///{fpath}"
+    engine = sqlalchemy.create_engine(fpath)
+    with engine.connect() as conn:
+        env = pd.read_sql_table("env", con=conn,
+                                index_col="step").reset_index().set_index([
+                                    "simulation_id", "params_id",
+                                    "iteration_id", "step"
+                                ])
+        agents = pd.read_sql_table("agents", con=conn, index_col=["simulation_id", "params_id", "iteration_id", "step", "agent_id"])
+        config = pd.read_sql_table("configuration", con=conn, index_col="simulation_id")
+        parameters = pd.read_sql_table("parameters", con=conn, index_col=["iteration_id", "params_id", "simulation_id"])
+        try:
+            parameters = parameters.pivot(columns="key", values="value")
+        except Exception as e:
+            print(f"warning: coult not pivot parameters: {e}")
+
+        return Results(config, parameters, env, agents)
|
||||||
h = History(db_path=db, backup=False, readonly=True)
|
|
||||||
df = h.read_sql(*args, **kwargs)
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
def read_csv(filename, keys=None, convert_types=False, **kwargs):
|
|
||||||
'''
|
|
||||||
Read a CSV in canonical form: ::
|
|
||||||
|
|
||||||
<agent_id, t_step, key, value, value_type>
|
|
||||||
|
|
||||||
'''
|
|
||||||
df = pd.read_csv(filename)
|
|
||||||
if convert_types:
|
|
||||||
df = convert_types_slow(df)
|
|
||||||
if keys:
|
|
||||||
df = df[df['key'].isin(keys)]
|
|
||||||
df = process_one(df)
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
def convert_row(row):
|
|
||||||
row['value'] = serialization.deserialize(row['value_type'], row['value'])
|
|
||||||
return row
|
|
||||||
|
|
||||||
|
|
||||||
def convert_types_slow(df):
|
|
||||||
'''
|
|
||||||
Go over every column in a dataframe and convert it to the type determined by the `get_types`
|
|
||||||
function.
|
|
||||||
|
|
||||||
This is a slow operation.
|
|
||||||
'''
|
|
||||||
dtypes = get_types(df)
|
|
||||||
for k, v in dtypes.items():
|
|
||||||
t = df[df['key']==k]
|
|
||||||
t['value'] = t['value'].astype(v)
|
|
||||||
df = df.apply(convert_row, axis=1)
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
def split_processed(df):
|
|
||||||
env = df.loc[:, df.columns.get_level_values(1).isin(['env', 'stats'])]
|
|
||||||
agents = df.loc[:, ~df.columns.get_level_values(1).isin(['env', 'stats'])]
|
|
||||||
return env, agents
|
|
||||||
|
|
||||||
|
|
||||||
def split_df(df):
|
|
||||||
'''
|
|
||||||
Split a dataframe in two dataframes: one with the history of agents,
|
|
||||||
and one with the environment history
|
|
||||||
'''
|
|
||||||
envmask = (df['agent_id'] == 'env')
|
|
||||||
n_env = envmask.sum()
|
|
||||||
if n_env == len(df):
|
|
||||||
return df, None
|
|
||||||
elif n_env == 0:
|
|
||||||
return None, df
|
|
||||||
agents, env = [x for _, x in df.groupby(envmask)]
|
|
||||||
return env, agents
|
|
||||||
|
|
||||||
|
|
||||||
def process(df, **kwargs):
|
|
||||||
'''
|
|
||||||
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
|
|
||||||
two dataframes with a column per key: one with the history of the agents, and one for the
|
|
||||||
history of the environment.
|
|
||||||
'''
|
|
||||||
env, agents = split_df(df)
|
|
||||||
return process_one(env, **kwargs), process_one(agents, **kwargs)
|
|
||||||
|
|
||||||
|
|
||||||
def get_types(df):
|
|
||||||
'''
|
|
||||||
Get the value type for every key stored in a raw history dataframe.
|
|
||||||
'''
|
|
||||||
dtypes = df.groupby(by=['key'])['value_type'].unique()
|
|
||||||
return {k:v[0] for k,v in dtypes.items()}
|
|
||||||
|
|
||||||
|
|
||||||
def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
|
|
||||||
fill=True, index=['t_step',],
|
|
||||||
aggfunc='first', **kwargs):
|
|
||||||
'''
|
|
||||||
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
|
|
||||||
a dataframe with a column per key
|
|
||||||
'''
|
|
||||||
if df is None:
|
|
||||||
return df
|
|
||||||
if keys:
|
|
||||||
df = df[df['key'].isin(keys)]
|
|
||||||
|
|
||||||
df = df.pivot_table(values=values, index=index, columns=columns,
|
|
||||||
aggfunc=aggfunc, **kwargs)
|
|
||||||
if fill:
|
|
||||||
df = fillna(df)
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
def get_count(df, *keys):
|
|
||||||
'''
|
|
||||||
For every t_step and key, get the value count.
|
|
||||||
|
|
||||||
The result is a dataframe with `t_step` as index, an a multiindex column based on `key` and the values found for each `key`.
|
|
||||||
'''
|
|
||||||
if keys:
|
|
||||||
df = df[list(keys)]
|
|
||||||
df.columns = df.columns.remove_unused_levels()
|
|
||||||
counts = pd.DataFrame()
|
|
||||||
for key in df.columns.levels[0]:
|
|
||||||
g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
|
|
||||||
for value, series in g.items():
|
|
||||||
counts[key, value] = series
|
|
||||||
counts.columns = pd.MultiIndex.from_tuples(counts.columns)
|
|
||||||
return counts
|
|
||||||
|
|
||||||
|
|
||||||
def get_majority(df, *keys):
|
|
||||||
'''
|
|
||||||
For every t_step and key, get the value of the majority of agents
|
|
||||||
|
|
||||||
The result is a dataframe with `t_step` as index, and columns based on `key`.
|
|
||||||
'''
|
|
||||||
df = get_count(df, *keys)
|
|
||||||
return df.stack(level=0).idxmax(axis=1).unstack()
|
|
||||||
|
|
||||||
|
|
||||||
def get_value(df, *keys, aggfunc='sum'):
|
|
||||||
'''
|
|
||||||
For every t_step and key, get the value of *numeric columns*, aggregated using a specific function.
|
|
||||||
'''
|
|
||||||
if keys:
|
|
||||||
df = df[list(keys)]
|
|
||||||
df.columns = df.columns.remove_unused_levels()
|
|
||||||
df = df.select_dtypes('number')
|
|
||||||
return df.groupby(level='key', axis=1).agg(aggfunc)
|
|
||||||
|
|
||||||
|
|
||||||
def plot_all(*args, plot_args={}, **kwargs):
|
|
||||||
'''
|
|
||||||
Read all the trial data and plot the result of applying a function on them.
|
|
||||||
'''
|
|
||||||
dfs = do_all(*args, **kwargs)
|
|
||||||
ps = []
|
|
||||||
for line in dfs:
|
|
||||||
f, df, config = line
|
|
||||||
if len(df) < 1:
|
|
||||||
continue
|
|
||||||
df.plot(title=config['name'], **plot_args)
|
|
||||||
ps.append(df)
|
|
||||||
return ps
|
|
||||||
|
|
||||||
def do_all(pattern, func, *keys, include_env=False, **kwargs):
|
|
||||||
for config_file, df, config in read_data(pattern, keys=keys):
|
|
||||||
if len(df) < 1:
|
|
||||||
continue
|
|
||||||
p = func(df, *keys, **kwargs)
|
|
||||||
yield config_file, p, config
|
|
||||||
|
|
||||||
|
|
||||||
def group_trials(trials, aggfunc=['mean', 'min', 'max', 'std']):
|
|
||||||
trials = list(trials)
|
|
||||||
trials = list(map(lambda x: x[1] if isinstance(x, tuple) else x, trials))
|
|
||||||
return pd.concat(trials).groupby(level=0).agg(aggfunc).reorder_levels([2, 0,1] ,axis=1)
|
|
||||||
|
|
||||||
|
|
||||||
def fillna(df):
|
|
||||||
new_df = df.ffill(axis=0)
|
|
||||||
return new_df
|
|
||||||
|
soil/config.py (new file, 2 lines)
@@ -0,0 +1,2 @@
+def load_config(cfg):
+    return cfg
@@ -1,26 +1,19 @@
 from mesa import DataCollector as MDC
 
-class SoilDataCollector(MDC):
 
+class SoilCollector(MDC):
+    def __init__(self, model_reporters=None, agent_reporters=None, tables=None, **kwargs):
+        model_reporters = model_reporters or {}
+        agent_reporters = agent_reporters or {}
+        tables = tables or {}
+        if 'agent_count' not in model_reporters:
+            model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
+        if 'time' not in model_reporters:
+            model_reporters['time'] = lambda m: m.now
+        # if 'state_id' not in agent_reporters:
+        #     agent_reporters['state_id'] = lambda agent: getattr(agent, 'state_id', None)
 
-    def __init__(self, environment, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-        # Populate model and env reporters so they have a key per
-        # So they can be shown in the web interface
-        self.environment = environment
-
-    @property
-    def model_vars(self):
-        pass
-
-    @model_vars.setter
-    def model_vars(self, value):
-        pass
-
-    @property
-    def agent_reporters(self):
-        self.model._history._
-
-        pass
+        super().__init__(model_reporters=model_reporters,
+                         agent_reporters=agent_reporters,
+                         tables=tables,
+                         **kwargs)
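For illustration only (not part of the diff): SoilCollector keeps the standard mesa DataCollector interface, so extra reporters can be passed alongside the `agent_count` and `time` reporters it injects by default. The import path and the reporters below are assumptions:

    from soil.datacollection import SoilCollector  # module path assumed

    datacollector = SoilCollector(
        model_reporters={
            "n_agents": lambda m: len(m.schedule.agents),  # illustrative reporter
        },
        agent_reporters={"state_id": "state_id"},  # mesa accepts attribute names too
    )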
243
soil/debugging.py
Normal file
@ -0,0 +1,243 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import pdb
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
|
||||||
|
from textwrap import indent
|
||||||
|
from functools import wraps
|
||||||
|
|
||||||
|
from .agents import FSM, MetaFSM
|
||||||
|
from mesa import Model, Agent
|
||||||
|
|
||||||
|
|
||||||
|
def wrapcmd(func):
|
||||||
|
@wraps(func)
|
||||||
|
def wrapper(self, arg: str, temporary=False):
|
||||||
|
sys.settrace(self.trace_dispatch)
|
||||||
|
|
||||||
|
lastself = self
|
||||||
|
known = globals()
|
||||||
|
known.update(self.curframe.f_globals)
|
||||||
|
known.update(self.curframe.f_locals)
|
||||||
|
known["attrs"] = arg.strip().split()
|
||||||
|
|
||||||
|
this = known.get("self", None)
|
||||||
|
|
||||||
|
if isinstance(this, Model):
|
||||||
|
known["model"] = this
|
||||||
|
elif isinstance(this, Agent):
|
||||||
|
known["agent"] = this
|
||||||
|
known["model"] = this.model
|
||||||
|
|
||||||
|
known["self"] = lastself
|
||||||
|
return exec(func.__code__, known, known)
|
||||||
|
|
||||||
|
return wrapper
|
||||||
|
|
||||||
|
|
||||||
|
class Debug(pdb.Pdb):
|
||||||
|
def __init__(self, *args, skip_soil=False, **kwargs):
|
||||||
|
skip = kwargs.get("skip", [])
|
||||||
|
if skip_soil:
|
||||||
|
skip.append("soil")
|
||||||
|
skip.append("contextlib")
|
||||||
|
skip.append("soil.*")
|
||||||
|
skip.append("mesa.*")
|
||||||
|
super(Debug, self).__init__(*args, skip=skip, **kwargs)
|
||||||
|
self.prompt = "[soil-pdb] "
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _soil_agents(model, attrs=None, pretty=True, **kwargs):
|
||||||
|
for agent in model.agents(**kwargs):
|
||||||
|
d = agent
|
||||||
|
print(" - " + indent(agent.to_str(keys=attrs, pretty=pretty), " "))
|
||||||
|
|
||||||
|
@wrapcmd
|
||||||
|
def do_soil_agents():
|
||||||
|
return Debug._soil_agents(model, attrs=attrs or None)
|
||||||
|
|
||||||
|
do_sa = do_soil_agents
|
||||||
|
|
||||||
|
@wrapcmd
|
||||||
|
def do_soil_list():
|
||||||
|
return Debug._soil_agents(model, attrs=["state_id"], pretty=False)
|
||||||
|
|
||||||
|
do_sl = do_soil_list
|
||||||
|
|
||||||
|
def do_continue_state(self, arg):
|
||||||
|
"""Continue until next time this state is reached"""
|
||||||
|
self.do_break_state(arg, temporary=True)
|
||||||
|
return self.do_continue("")
|
||||||
|
|
||||||
|
do_cs = do_continue_state
|
||||||
|
|
||||||
|
@wrapcmd
|
||||||
|
def do_soil_agent():
|
||||||
|
if not agent:
|
||||||
|
print("No agent available")
|
||||||
|
return
|
||||||
|
|
||||||
|
keys = None
|
||||||
|
if attrs:
|
||||||
|
keys = []
|
||||||
|
for k in attrs:
|
||||||
|
for key in agent.keys():
|
||||||
|
if key.startswith(k):
|
||||||
|
keys.append(key)
|
||||||
|
|
||||||
|
print(agent.to_str(pretty=True, keys=keys))
|
||||||
|
|
||||||
|
do_aa = do_soil_agent
|
||||||
|
|
||||||
|
def do_break_step(self, arg: str):
|
||||||
|
"""
|
||||||
|
Break before the next step.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
known = globals()
|
||||||
|
known.update(self.curframe.f_globals)
|
||||||
|
known.update(self.curframe.f_locals)
|
||||||
|
func = getattr(known["model"], "step")
|
||||||
|
except AttributeError as ex:
|
||||||
|
self.error(f"The model does not have a step function: {ex}")
|
||||||
|
return
|
||||||
|
if hasattr(func, "__func__"):
|
||||||
|
func = func.__func__
|
||||||
|
|
||||||
|
code = func.__code__
|
||||||
|
# use co_name to identify the bkpt (function names
|
||||||
|
# could be aliased, but co_name is invariant)
|
||||||
|
funcname = code.co_name
|
||||||
|
lineno = code.co_firstlineno
|
||||||
|
filename = code.co_filename
|
||||||
|
|
||||||
|
# Check for reasonable breakpoint
|
||||||
|
line = self.checkline(filename, lineno)
|
||||||
|
if not line:
|
||||||
|
raise ValueError("no line found")
|
||||||
|
# now set the break point
|
||||||
|
|
||||||
|
existing = self.get_breaks(filename, line)
|
||||||
|
if existing:
|
||||||
|
self.message("Breakpoint already exists at %s:%d" % (filename, line))
|
||||||
|
return
|
||||||
|
cond = f"self.schedule.steps > {model.schedule.steps}"
|
||||||
|
err = self.set_break(filename, line, True, cond, funcname)
|
||||||
|
if err:
|
||||||
|
self.error(err)
|
||||||
|
else:
|
||||||
|
bp = self.get_breaks(filename, line)[-1]
|
||||||
|
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
|
||||||
|
return self.do_continue("")
|
||||||
|
|
||||||
|
do_bstep = do_break_step
|
||||||
|
|
||||||
|
def do_break_state(self, arg: str, instances=None, temporary=False):
|
||||||
|
"""
|
||||||
|
Break before a specified state is stepped into.
|
||||||
|
"""
|
||||||
|
|
||||||
|
klass = None
|
||||||
|
state = arg
|
||||||
|
if not state:
|
||||||
|
self.error("Specify at least a state name")
|
||||||
|
return
|
||||||
|
|
||||||
|
state, *tokens = state.lstrip().split()
|
||||||
|
if tokens:
|
||||||
|
instances = list(eval(token) for token in tokens)
|
||||||
|
|
||||||
|
colon = state.find(":")
|
||||||
|
|
||||||
|
if colon > 0:
|
||||||
|
klass = state[:colon].rstrip()
|
||||||
|
state = state[colon + 1 :].strip()
|
||||||
|
|
||||||
|
print(klass, state, tokens)
|
||||||
|
klass = eval(klass, self.curframe.f_globals, self.curframe_locals)
|
||||||
|
|
||||||
|
if klass:
|
||||||
|
klasses = [klass]
|
||||||
|
else:
|
||||||
|
klasses = [
|
||||||
|
k
|
||||||
|
for k in self.curframe.f_globals.values()
|
||||||
|
if isinstance(k, type) and issubclass(k, FSM)
|
||||||
|
]
|
||||||
|
|
||||||
|
if not klasses:
|
||||||
|
self.error("No agent classes found")
|
||||||
|
|
||||||
|
for klass in klasses:
|
||||||
|
try:
|
||||||
|
func = getattr(klass, state)
|
||||||
|
except AttributeError:
|
||||||
|
self.error(f"State {state} not found in class {klass}")
|
||||||
|
continue
|
||||||
|
if hasattr(func, "__func__"):
|
||||||
|
func = func.__func__
|
||||||
|
|
||||||
|
code = func.__code__
|
||||||
|
# use co_name to identify the bkpt (function names
|
||||||
|
# could be aliased, but co_name is invariant)
|
||||||
|
funcname = code.co_name
|
||||||
|
lineno = code.co_firstlineno
|
||||||
|
filename = code.co_filename
|
||||||
|
|
||||||
|
# Check for reasonable breakpoint
|
||||||
|
line = self.checkline(filename, lineno)
|
||||||
|
if not line:
|
||||||
|
raise ValueError("no line found")
|
||||||
|
# now set the break point
|
||||||
|
cond = None
|
||||||
|
if instances:
|
||||||
|
cond = f"self.unique_id in { repr(instances) }"
|
||||||
|
|
||||||
|
existing = self.get_breaks(filename, line)
|
||||||
|
if existing:
|
||||||
|
self.message("Breakpoint already exists at %s:%d" % (filename, line))
|
||||||
|
continue
|
||||||
|
err = self.set_break(filename, line, temporary, cond, funcname)
|
||||||
|
if err:
|
||||||
|
self.error(err)
|
||||||
|
else:
|
||||||
|
bp = self.get_breaks(filename, line)[-1]
|
||||||
|
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
|
||||||
|
|
||||||
|
do_bs = do_break_state
|
||||||
|
|
||||||
|
def do_break_state_self(self, arg: str, temporary=False):
|
||||||
|
"""
|
||||||
|
Break before a specified state is stepped into, for the current agent
|
||||||
|
"""
|
||||||
|
agent = self.curframe.f_locals.get("self")
|
||||||
|
if not agent:
|
||||||
|
self.error("No current agent.")
|
||||||
|
self.error("Try this again when the debugger is stopped inside an agent")
|
||||||
|
return
|
||||||
|
|
||||||
|
arg = f"{agent.__class__.__name__}:{ arg } {agent.unique_id}"
|
||||||
|
return self.do_break_state(arg)
|
||||||
|
|
||||||
|
do_bss = do_break_state_self
|
||||||
|
|
||||||
|
|
||||||
|
debugger = None
|
||||||
|
|
||||||
|
|
||||||
|
def set_trace(frame=None, **kwargs):
|
||||||
|
global debugger
|
||||||
|
if debugger is None:
|
||||||
|
debugger = Debug(**kwargs)
|
||||||
|
frame = frame or sys._getframe().f_back
|
||||||
|
debugger.set_trace(frame)
|
||||||
|
|
||||||
|
|
||||||
|
def post_mortem(traceback=None, **kwargs):
|
||||||
|
global debugger
|
||||||
|
if debugger is None:
|
||||||
|
debugger = Debug(**kwargs)
|
||||||
|
t = sys.exc_info()[2]
|
||||||
|
debugger.reset()
|
||||||
|
debugger.interaction(None, t)
|
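A hedged usage sketch for the two module-level entry points defined above. Once inside the `[soil-pdb]` prompt, the extra commands become available: `sa`/`soil_agents` lists agents, `sl`/`soil_list` prints their `state_id`, `bs Klass:state [ids...]` sets a breakpoint on an FSM state, and `bss state` does the same for the agent currently on the stack. The model and entry point names below are placeholders:

```python
from soil import debugging

def fragile_step(model):
    # `model` is assumed to be a soil Environment / mesa Model of your own
    debugging.set_trace()        # drop into the custom Pdb right here
    model.step()

try:
    fragile_step(my_model)       # my_model is a placeholder for your own model
except Exception:
    debugging.post_mortem()      # inspect the failing frame after the fact
```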
6
soil/decorators.py
Normal file
@ -0,0 +1,6 @@
|
|||||||
|
def report(f: property):
|
||||||
|
if isinstance(f, property):
|
||||||
|
setattr(f.fget, "add_to_report", True)
|
||||||
|
else:
|
||||||
|
setattr(f, "add_to_report", True)
|
||||||
|
return f
|
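A short sketch of how this decorator is meant to be combined with the reporter scan in `BaseEnvironment.__new__` (the model and property names below are made up for illustration):

```python
from soil import environment
from soil.decorators import report

class MyModel(environment.Environment):
    @report            # marks the underlying getter with add_to_report=True
    @property
    def double_steps(self):
        return self.schedule.steps * 2

# When MyModel is instantiated, __new__ finds the flagged property and calls
# add_model_reporter("double_steps", ...), so the value is collected every step.
```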
@ -1,208 +1,179 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
import os
|
import os
|
||||||
import sqlite3
|
import sqlite3
|
||||||
import csv
|
|
||||||
import math
|
import math
|
||||||
import random
|
|
||||||
import yaml
|
|
||||||
import tempfile
|
|
||||||
import logging
|
import logging
|
||||||
import pandas as pd
|
import inspect
|
||||||
|
|
||||||
|
from typing import Any, Callable, Dict, Optional, Union, List, Type
|
||||||
|
from collections import namedtuple
|
||||||
from time import time as current_time
|
from time import time as current_time
|
||||||
from copy import deepcopy
|
from copy import deepcopy
|
||||||
from networkx.readwrite import json_graph
|
|
||||||
|
|
||||||
import networkx as nx
|
import networkx as nx
|
||||||
|
|
||||||
from tsih import History, Record, Key, NoHistory
|
from mesa import Model, Agent
|
||||||
|
|
||||||
from mesa import Model
|
from . import agents as agentmod, datacollection, serialization, utils, time, network, events
|
||||||
|
|
||||||
from . import serialization, agents, analysis, utils, time
|
|
||||||
|
|
||||||
# These properties will be copied when pickling/unpickling the environment
|
# TODO: maybe add metaclass to read attributes of a model
|
||||||
_CONFIG_PROPS = [ 'name',
|
|
||||||
'states',
|
|
||||||
'default_state',
|
|
||||||
'interval',
|
|
||||||
]
|
|
||||||
|
|
||||||
class Environment(Model):
|
class BaseEnvironment(Model):
|
||||||
"""
|
"""
|
||||||
The environment is key in a simulation. It contains the network topology,
|
The environment is key in a simulation. It controls how agents interact,
|
||||||
a reference to network and environment agents, as well as the environment
|
and what information is available to them.
|
||||||
params, which are used as shared state between agents.
|
|
||||||
|
This is an opinionated version of `mesa.Model` class, which adds many
|
||||||
|
convenience methods and abstractions.
|
||||||
|
|
||||||
The environment parameters and the state of every agent can be accessed
|
The environment parameters and the state of every agent can be accessed
|
||||||
both by using the environment as a dictionary or with the environment's
|
both by using the environment as a dictionary and with the environment's
|
||||||
:meth:`soil.environment.Environment.get` method.
|
:meth:`soil.environment.Environment.get` method.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, name=None,
|
collector_class = datacollection.SoilCollector
|
||||||
network_agents=None,
|
|
||||||
environment_agents=None,
|
|
||||||
states=None,
|
|
||||||
default_state=None,
|
|
||||||
interval=1,
|
|
||||||
network_params=None,
|
|
||||||
seed=None,
|
|
||||||
topology=None,
|
|
||||||
schedule=None,
|
|
||||||
initial_time=0,
|
|
||||||
environment_params=None,
|
|
||||||
history=True,
|
|
||||||
dir_path=None,
|
|
||||||
**kwargs):
|
|
||||||
|
|
||||||
|
def __new__(cls,
|
||||||
|
*args: Any,
|
||||||
|
seed="default",
|
||||||
|
dir_path=None,
|
||||||
|
collector_class: type = None,
|
||||||
|
agent_reporters: Optional[Any] = None,
|
||||||
|
model_reporters: Optional[Any] = None,
|
||||||
|
tables: Optional[Any] = None,
|
||||||
|
**kwargs: Any) -> Any:
|
||||||
|
"""Create a new model with a default seed value"""
|
||||||
|
self = super().__new__(cls, *args, seed=seed, **kwargs)
|
||||||
|
self.dir_path = dir_path or os.getcwd()
|
||||||
|
collector_class = collector_class or cls.collector_class
|
||||||
|
collector_class = serialization.deserialize(collector_class)
|
||||||
|
self.datacollector = collector_class(
|
||||||
|
model_reporters=model_reporters,
|
||||||
|
agent_reporters=agent_reporters,
|
||||||
|
tables=tables,
|
||||||
|
)
|
||||||
|
for k in dir(cls):
|
||||||
|
v = getattr(cls, k)
|
||||||
|
if isinstance(v, property):
|
||||||
|
v = v.fget
|
||||||
|
if getattr(v, "add_to_report", False):
|
||||||
|
self.add_model_reporter(k, v)
|
||||||
|
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
*,
|
||||||
|
id="unnamed_env",
|
||||||
|
seed="default",
|
||||||
|
dir_path=None,
|
||||||
|
schedule_class=time.TimedActivation,
|
||||||
|
interval=1,
|
||||||
|
logger = None,
|
||||||
|
agents: Optional[Dict] = None,
|
||||||
|
collector_class: type = datacollection.SoilCollector,
|
||||||
|
agent_reporters: Optional[Any] = None,
|
||||||
|
model_reporters: Optional[Any] = None,
|
||||||
|
tables: Optional[Any] = None,
|
||||||
|
init: bool = True,
|
||||||
|
**env_params,
|
||||||
|
):
|
||||||
|
|
||||||
super().__init__()
|
super().__init__()
|
||||||
|
|
||||||
self.schedule = schedule
|
|
||||||
if schedule is None:
|
|
||||||
self.schedule = time.TimedActivation()
|
|
||||||
|
|
||||||
self.name = name or 'UnnamedEnvironment'
|
self.current_id = -1
|
||||||
seed = seed or current_time()
|
|
||||||
random.seed(seed)
|
|
||||||
if isinstance(states, list):
|
|
||||||
states = dict(enumerate(states))
|
|
||||||
self.states = deepcopy(states) if states else {}
|
|
||||||
self.default_state = deepcopy(default_state) or {}
|
|
||||||
|
|
||||||
if topology is None:
|
self.id = id
|
||||||
network_params = network_params or {}
|
|
||||||
topology = serialization.load_network(network_params,
|
|
||||||
dir_path=dir_path)
|
|
||||||
if not topology:
|
|
||||||
topology = nx.Graph()
|
|
||||||
self.G = nx.Graph(topology)
|
|
||||||
|
|
||||||
|
if logger:
|
||||||
|
self.logger = logger
|
||||||
|
else:
|
||||||
|
self.logger = utils.logger.getChild(self.id)
|
||||||
|
|
||||||
self.environment_params = environment_params or {}
|
if schedule_class is None:
|
||||||
self.environment_params.update(kwargs)
|
schedule_class = time.TimedActivation
|
||||||
|
else:
|
||||||
|
schedule_class = serialization.deserialize(schedule_class)
|
||||||
|
|
||||||
self._env_agents = {}
|
|
||||||
self.interval = interval
|
self.interval = interval
|
||||||
if history:
|
self.schedule = schedule_class(self)
|
||||||
history = History
|
|
||||||
else:
|
|
||||||
history = NoHistory
|
|
||||||
self._history = history(name=self.name,
|
|
||||||
backup=True)
|
|
||||||
self['SEED'] = seed
|
|
||||||
|
|
||||||
if network_agents:
|
for (k, v) in env_params.items():
|
||||||
distro = agents.calculate_distribution(network_agents)
|
self[k] = v
|
||||||
self.network_agents = agents._convert_agent_types(distro)
|
|
||||||
else:
|
|
||||||
self.network_agents = []
|
|
||||||
|
|
||||||
environment_agents = environment_agents or []
|
if agents:
|
||||||
if environment_agents:
|
self.add_agents(**agents)
|
||||||
distro = agents.calculate_distribution(environment_agents)
|
if init:
|
||||||
environment_agents = agents._convert_agent_types(distro)
|
self.init()
|
||||||
self.environment_agents = environment_agents
|
self.datacollector.collect(self)
|
||||||
|
|
||||||
self.logger = utils.logger.getChild(self.name)
|
def init(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
@property
|
||||||
|
def agents(self):
|
||||||
|
return agentmod.AgentView(self.schedule._agents)
|
||||||
|
|
||||||
|
def agent(self, *args, **kwargs):
|
||||||
|
return agentmod.AgentView(self.schedule._agents).one(*args, **kwargs)
|
||||||
|
|
||||||
|
def count_agents(self, *args, **kwargs):
|
||||||
|
return sum(1 for i in self.agents(*args, **kwargs))
|
||||||
|
|
||||||
|
def agent_df(self, steps=False):
|
||||||
|
df = self.datacollector.get_agent_vars_dataframe()
|
||||||
|
if steps:
|
||||||
|
df.index.rename(["step", "agent_id"], inplace=True)
|
||||||
|
return df
|
||||||
|
model_df = self.datacollector.get_model_vars_dataframe()
|
||||||
|
df.index = df.index.set_levels(model_df.time, level=0).rename(["time", "agent_id"])
|
||||||
|
return df
|
||||||
|
|
||||||
|
def model_df(self, steps=False):
|
||||||
|
df = self.datacollector.get_model_vars_dataframe()
|
||||||
|
if steps:
|
||||||
|
return df
|
||||||
|
df.index.rename("step", inplace=True)
|
||||||
|
return df.reset_index().set_index("time")
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def now(self):
|
def now(self):
|
||||||
if self.schedule:
|
if self.schedule:
|
||||||
return self.schedule.time
|
return self.schedule.time
|
||||||
raise Exception('The environment has not been scheduled, so it has no sense of time')
|
raise Exception(
|
||||||
|
"The environment has not been scheduled, so it has no sense of time"
|
||||||
@property
|
|
||||||
def agents(self):
|
|
||||||
yield from self.environment_agents
|
|
||||||
yield from self.network_agents
|
|
||||||
|
|
||||||
@property
|
|
||||||
def environment_agents(self):
|
|
||||||
for ref in self._env_agents.values():
|
|
||||||
yield ref
|
|
||||||
|
|
||||||
@environment_agents.setter
|
|
||||||
def environment_agents(self, environment_agents):
|
|
||||||
self._environment_agents = environment_agents
|
|
||||||
|
|
||||||
for (ix, agent) in enumerate(self._environment_agents):
|
|
||||||
self.init_agent(len(self.G) + ix, agent_definitions=environment_agents, with_node=False)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def network_agents(self):
|
|
||||||
for i in self.G.nodes():
|
|
||||||
node = self.G.nodes[i]
|
|
||||||
if 'agent' in node:
|
|
||||||
yield node['agent']
|
|
||||||
|
|
||||||
@network_agents.setter
|
|
||||||
def network_agents(self, network_agents):
|
|
||||||
self._network_agents = network_agents
|
|
||||||
for ix in self.G.nodes():
|
|
||||||
self.init_agent(ix, agent_definitions=network_agents)
|
|
||||||
|
|
||||||
def init_agent(self, agent_id, agent_definitions, with_node=True):
|
|
||||||
init = False
|
|
||||||
|
|
||||||
state = {}
|
|
||||||
if with_node:
|
|
||||||
node = self.G.nodes[agent_id]
|
|
||||||
state = dict(node)
|
|
||||||
state.update(self.states.get(agent_id, {}))
|
|
||||||
|
|
||||||
agent_type = None
|
|
||||||
if 'agent_type' in state:
|
|
||||||
agent_type = state['agent_type']
|
|
||||||
elif with_node and 'agent_type' in node:
|
|
||||||
agent_type = node['agent_type']
|
|
||||||
elif 'agent_type' in self.default_state:
|
|
||||||
agent_type = self.default_state['agent_type']
|
|
||||||
|
|
||||||
if agent_type:
|
|
||||||
agent_type = agents.deserialize_type(agent_type)
|
|
||||||
elif agent_definitions:
|
|
||||||
agent_type, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
|
|
||||||
else:
|
|
||||||
serialization.logger.debug('Skipping agent {}'.format(agent_id))
|
|
||||||
return
|
|
||||||
return self.set_agent(agent_id, agent_type, state, with_node=with_node)
|
|
||||||
|
|
||||||
def set_agent(self, agent_id, agent_type, state=None, with_node=True):
|
|
||||||
defstate = deepcopy(self.default_state) or {}
|
|
||||||
defstate.update(self.states.get(agent_id, {}))
|
|
||||||
if with_node:
|
|
||||||
node = self.G.nodes[agent_id]
|
|
||||||
defstate.update(node.get('state', {}))
|
|
||||||
if state:
|
|
||||||
defstate.update(state)
|
|
||||||
a = None
|
|
||||||
if agent_type:
|
|
||||||
state = defstate
|
|
||||||
a = agent_type(model=self,
|
|
||||||
unique_id=agent_id
|
|
||||||
)
|
)
|
||||||
|
def init_agents(self):
|
||||||
|
pass
|
||||||
|
|
||||||
for (k, v) in state.items():
|
def add_agent(self, agent_class, unique_id=None, **agent):
|
||||||
setattr(a, k, v)
|
if unique_id is None:
|
||||||
|
unique_id = self.next_id()
|
||||||
|
|
||||||
|
agent["unique_id"] = unique_id
|
||||||
|
|
||||||
|
agent = dict(**agent)
|
||||||
|
unique_id = agent.pop("unique_id", None)
|
||||||
|
if unique_id is None:
|
||||||
|
unique_id = self.next_id()
|
||||||
|
|
||||||
|
a = serialization.deserialize(agent_class)(unique_id=unique_id, model=self, **agent)
|
||||||
|
|
||||||
if with_node:
|
|
||||||
node['agent'] = a
|
|
||||||
self.schedule.add(a)
|
self.schedule.add(a)
|
||||||
return a
|
return a
|
||||||
|
|
||||||
def add_node(self, agent_type, state=None):
|
def add_agents(self, agent_classes: List[type], k, weights: Optional[List[float]] = None, **kwargs):
|
||||||
agent_id = int(len(self.G.nodes()))
|
if isinstance(agent_classes, type):
|
||||||
self.G.add_node(agent_id)
|
agent_classes = [agent_classes]
|
||||||
a = self.set_agent(agent_id, agent_type, state)
|
if weights is None:
|
||||||
a['visible'] = True
|
weights = [1] * len(agent_classes)
|
||||||
return a
|
|
||||||
|
|
||||||
def add_edge(self, agent1, agent2, start=None, **attrs):
|
for cls in self.random.choices(agent_classes, weights=weights, k=k):
|
||||||
if hasattr(agent1, 'id'):
|
self.add_agent(agent_class=cls, **kwargs)
|
||||||
agent1 = agent1.id
|
|
||||||
if hasattr(agent2, 'id'):
|
|
||||||
agent2 = agent2.id
|
|
||||||
start = start or self.now
|
|
||||||
return self.G.add_edge(agent1, agent2, **attrs)
|
|
||||||
|
|
||||||
def log(self, message, *args, level=logging.INFO, **kwargs):
|
def log(self, message, *args, level=logging.INFO, **kwargs):
|
||||||
if not self.logger.isEnabledFor(level):
|
if not self.logger.isEnabledFor(level):
|
||||||
@ -212,185 +183,248 @@ class Environment(Model):
|
|||||||
for k, v in kwargs:
|
for k, v in kwargs:
|
||||||
message += " {k}={v} ".format(k, v)
|
message += " {k}={v} ".format(k, v)
|
||||||
extra = {}
|
extra = {}
|
||||||
extra['now'] = self.now
|
extra["now"] = self.now
|
||||||
extra['unique_id'] = self.name
|
extra["id"] = self.id
|
||||||
return self.logger.log(level, message, extra=extra)
|
return self.logger.log(level, message, extra=extra)
|
||||||
|
|
||||||
def step(self):
|
def step(self):
|
||||||
|
"""
|
||||||
|
Advance one step in the simulation, and update the data collection and scheduler appropriately
|
||||||
|
"""
|
||||||
super().step()
|
super().step()
|
||||||
self.schedule.step()
|
self.schedule.step()
|
||||||
|
self.datacollector.collect(self)
|
||||||
|
|
||||||
def run(self, until, *args, **kwargs):
|
if self.logger.isEnabledFor(logging.DEBUG):
|
||||||
self._save_state()
|
msg = "Model data:\n"
|
||||||
|
max_width = max(len(k) for k in self.datacollector.model_vars.keys())
|
||||||
|
for (k, v) in self.datacollector.model_vars.items():
|
||||||
|
msg += f"\t{k:<{max_width}}: {v[-1]:>6}\n"
|
||||||
|
self.logger.debug(f"--- Steps: {self.schedule.steps:^5} - Time: {self.now:^5} --- " + msg)
|
||||||
|
|
||||||
while self.schedule.next_time < until:
|
def add_model_reporter(self, name, func=None):
|
||||||
self.step()
|
if not func:
|
||||||
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
|
func = lambda env: getattr(env, name)
|
||||||
self.schedule.time = until
|
self.datacollector._new_model_reporter(name, func)
|
||||||
self._history.flush_cache()
|
|
||||||
|
|
||||||
def _save_state(self, now=None):
|
def add_agent_reporter(self, name, agent_type=None):
|
||||||
serialization.logger.debug('Saving state @{}'.format(self.now))
|
if agent_type:
|
||||||
self._history.save_records(self.state_to_tuples(now=now))
|
reporter = lambda a: getattr(a, name) if isinstance(a, agent_type) else None
|
||||||
|
else:
|
||||||
|
reporter = lambda a: getattr(a, name, None)
|
||||||
|
self.datacollector._new_agent_reporter(name, reporter)
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def run(cls, *,
|
||||||
|
iterations=1,
|
||||||
|
num_processes=1, **kwargs):
|
||||||
|
from .simulation import Simulation
|
||||||
|
return Simulation(name=cls.__name__,
|
||||||
|
model=cls, iterations=iterations,
|
||||||
|
num_processes=num_processes, **kwargs).run()
|
||||||
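Taken together, the reporter helpers and the new `run` classmethod let a model be defined and launched without a configuration file. A hedged sketch (the agent class and its behaviour are placeholders; `soil.agents.Agent` is assumed to be directly subclassable):

```python
from soil import agents, environment

class MyAgent(agents.Agent):          # assumption: Agent can be subclassed directly
    def step(self):
        self.model.log(f"{self.unique_id} stepping")

class MyModel(environment.Environment):
    def init(self):                   # called at the end of __init__ when init=True
        self.add_agents(MyAgent, k=10)
        self.add_model_reporter("agent_total", lambda m: m.count_agents())

# Roughly equivalent to Simulation(name="MyModel", model=MyModel, iterations=3).run()
results = MyModel.run(iterations=3)
```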
|
|
||||||
def __getitem__(self, key):
|
def __getitem__(self, key):
|
||||||
if isinstance(key, tuple):
|
try:
|
||||||
self._history.flush_cache()
|
return getattr(self, key)
|
||||||
return self._history[key]
|
except AttributeError:
|
||||||
|
raise KeyError(f"key {key} not found in environment")
|
||||||
|
|
||||||
return self.environment_params[key]
|
def __delitem__(self, key):
|
||||||
|
return delattr(self, key)
|
||||||
def __setitem__(self, key, value):
|
|
||||||
if isinstance(key, tuple):
|
|
||||||
k = Key(*key)
|
|
||||||
self._history.save_record(*k,
|
|
||||||
value=value)
|
|
||||||
return
|
|
||||||
self.environment_params[key] = value
|
|
||||||
self._history.save_record(dict_id='env',
|
|
||||||
t_step=self.now,
|
|
||||||
key=key,
|
|
||||||
value=value)
|
|
||||||
|
|
||||||
def __contains__(self, key):
|
def __contains__(self, key):
|
||||||
return key in self.environment_params
|
return hasattr(self, key)
|
||||||
|
|
||||||
|
def __setitem__(self, key, value):
|
||||||
|
setattr(self, key, value)
|
||||||
|
|
||||||
|
def __str__(self):
|
||||||
|
return str(dict(self))
|
||||||
|
|
||||||
|
def __len__(self):
|
||||||
|
return sum(1 for n in self.keys())
|
||||||
|
|
||||||
|
def __iter__(self):
|
||||||
|
return iter(self.agents())
|
||||||
|
|
||||||
def get(self, key, default=None):
|
def get(self, key, default=None):
|
||||||
'''
|
|
||||||
Get the value of an environment attribute in a
|
|
||||||
given point in the simulation (history).
|
|
||||||
If key is an attribute name, this method returns
|
|
||||||
the current value.
|
|
||||||
To get values at other times, use a
|
|
||||||
:meth: `soil.history.Key` tuple.
|
|
||||||
'''
|
|
||||||
return self[key] if key in self else default
|
return self[key] if key in self else default
|
||||||
|
|
||||||
def get_agent(self, agent_id):
|
def keys(self):
|
||||||
return self.G.nodes[agent_id]['agent']
|
return (k for k in self.__dict__ if k[0] != "_")
|
||||||
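The dictionary protocol above maps straight onto attributes, so extra keyword arguments passed to the constructor can be read back either way. A small sketch:

```python
from soil import environment

env = environment.Environment(prob_infect=0.2)   # extra kwargs stored via self[k] = v

assert env.prob_infect == 0.2
assert env["prob_infect"] == 0.2
assert "prob_infect" in env                      # __contains__ delegates to hasattr
assert env.get("missing_key", 42) == 42          # get falls back to the default
```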
|
|
||||||
def get_agents(self, nodes=None):
|
class NetworkEnvironment(BaseEnvironment):
|
||||||
if nodes is None:
|
"""
|
||||||
return self.agents
|
The NetworkEnvironment is an environment that includes one or more networkx.Graph instances
|
||||||
return (self.G.nodes[i]['agent'] for i in nodes)
|
and methods to associate agents to nodes and vice versa.
|
||||||
|
"""
|
||||||
|
|
||||||
def dump_csv(self, f):
|
def __init__(self,
|
||||||
with utils.open_or_reuse(f, 'w') as f:
|
*args,
|
||||||
cr = csv.writer(f)
|
topology: Optional[Union[nx.Graph, str]] = None,
|
||||||
cr.writerow(('agent_id', 't_step', 'key', 'value'))
|
agent_class: Optional[Type[agentmod.Agent]] = None,
|
||||||
for i in self.history_to_tuples():
|
network_generator: Optional[Callable] = None,
|
||||||
cr.writerow(i)
|
network_params: Optional[Dict] = {},
|
||||||
|
init=True,
|
||||||
def dump_gexf(self, f):
|
**kwargs):
|
||||||
G = self.history_to_graph()
|
self.topology = topology
|
||||||
# Workaround for geometric models
|
self.network_generator = network_generator
|
||||||
# See soil/soil#4
|
self.network_params = network_params
|
||||||
for node in G.nodes():
|
if topology or network_params or network_generator:
|
||||||
if 'pos' in G.nodes[node]:
|
self.create_network(topology, generator=network_generator, **network_params)
|
||||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
|
||||||
del (G.nodes[node]['pos'])
|
|
||||||
|
|
||||||
nx.write_gexf(G, f, version="1.2draft")
|
|
||||||
|
|
||||||
def dump(self, *args, formats=None, **kwargs):
|
|
||||||
if not formats:
|
|
||||||
return
|
|
||||||
functions = {
|
|
||||||
'csv': self.dump_csv,
|
|
||||||
'gexf': self.dump_gexf
|
|
||||||
}
|
|
||||||
for f in formats:
|
|
||||||
if f in functions:
|
|
||||||
functions[f](*args, **kwargs)
|
|
||||||
else:
|
else:
|
||||||
raise ValueError('Unknown format: {}'.format(f))
|
self.G = nx.Graph()
|
||||||
|
super().__init__(*args, **kwargs, init=False)
|
||||||
|
|
||||||
def dump_sqlite(self, f):
|
self.agent_class = agent_class
|
||||||
return self._history.dump(f)
|
if agent_class:
|
||||||
|
self.agent_class = serialization.deserialize(agent_class)
|
||||||
|
if self.agent_class:
|
||||||
|
self.populate_network(self.agent_class)
|
||||||
|
self._check_agent_nodes()
|
||||||
|
if init:
|
||||||
|
self.init()
|
||||||
|
self.datacollector.collect(self)
|
||||||
|
|
||||||
def state_to_tuples(self, now=None):
|
def add_agent(self, agent_class, *args, node_id=None, topology=None, **kwargs):
|
||||||
if now is None:
|
if node_id is None and topology is None:
|
||||||
now = self.now
|
return super().add_agent(agent_class, *args, **kwargs)
|
||||||
for k, v in self.environment_params.items():
|
try:
|
||||||
yield Record(dict_id='env',
|
a = super().add_agent(agent_class, *args, node_id=node_id, **kwargs)
|
||||||
t_step=now,
|
except TypeError:
|
||||||
key=k,
|
self.logger.warning(f"Agent constructor for {agent_class} does not have a node_id attribute. Might be a bug.")
|
||||||
value=v)
|
a = super().add_agent(agent_class, *args, **kwargs)
|
||||||
for agent in self.agents:
|
self.G.nodes[node_id]["agent"] = a
|
||||||
for k, v in agent.state.items():
|
return a
|
||||||
yield Record(dict_id=agent.id,
|
|
||||||
t_step=now,
|
|
||||||
key=k,
|
|
||||||
value=v)
|
|
||||||
|
|
||||||
def history_to_tuples(self):
|
def add_agents(self, *args, k=None, **kwargs):
|
||||||
return self._history.to_tuples()
|
if not k and not self.G:
|
||||||
|
raise ValueError("Cannot add agents to an empty network")
|
||||||
|
super().add_agents(*args, k=k or len(self.G), **kwargs)
|
||||||
|
|
||||||
def history_to_graph(self):
|
def create_network(self, topology=None, generator=None, path=None, **network_params):
|
||||||
G = nx.Graph(self.G)
|
if topology is not None:
|
||||||
|
topology = network.from_topology(topology, dir_path=self.dir_path)
|
||||||
for agent in self.network_agents:
|
elif path is not None:
|
||||||
|
topology = network.from_topology(path, dir_path=self.dir_path)
|
||||||
attributes = {'agent': str(agent.__class__)}
|
elif generator is not None:
|
||||||
lastattributes = {}
|
topology = network.from_params(generator=generator, dir_path=self.dir_path, **network_params)
|
||||||
spells = []
|
|
||||||
lastvisible = False
|
|
||||||
laststep = None
|
|
||||||
history = self[agent.id, None, None]
|
|
||||||
if not history:
|
|
||||||
continue
|
|
||||||
for t_step, attribute, value in sorted(list(history)):
|
|
||||||
if attribute == 'visible':
|
|
||||||
nowvisible = value
|
|
||||||
if nowvisible and not lastvisible:
|
|
||||||
laststep = t_step
|
|
||||||
if not nowvisible and lastvisible:
|
|
||||||
spells.append((laststep, t_step))
|
|
||||||
|
|
||||||
lastvisible = nowvisible
|
|
||||||
continue
|
|
||||||
key = 'attr_' + attribute
|
|
||||||
if key not in attributes:
|
|
||||||
attributes[key] = list()
|
|
||||||
if key not in lastattributes:
|
|
||||||
lastattributes[key] = (value, t_step)
|
|
||||||
elif lastattributes[key][0] != value:
|
|
||||||
last_value, laststep = lastattributes[key]
|
|
||||||
commit_value = (last_value, laststep, t_step)
|
|
||||||
if key not in attributes:
|
|
||||||
attributes[key] = list()
|
|
||||||
attributes[key].append(commit_value)
|
|
||||||
lastattributes[key] = (value, t_step)
|
|
||||||
for k, v in lastattributes.items():
|
|
||||||
attributes[k].append((v[0], v[1], None))
|
|
||||||
if lastvisible:
|
|
||||||
spells.append((laststep, None))
|
|
||||||
if spells:
|
|
||||||
G.add_node(agent.id, spells=spells, **attributes)
|
|
||||||
else:
|
else:
|
||||||
G.add_node(agent.id, **attributes)
|
raise ValueError("topology must be a networkx.Graph or a string, or network_generator must be provided")
|
||||||
|
self.G = topology
|
||||||
|
|
||||||
return G
|
def init_agents(self, *args, **kwargs):
|
||||||
|
"""Initialize the agents from a"""
|
||||||
|
super().init_agents(*args, **kwargs)
|
||||||
|
|
||||||
def __getstate__(self):
|
@property
|
||||||
state = {}
|
def network_agents(self):
|
||||||
for prop in _CONFIG_PROPS:
|
"""Return agents still alive and assigned to a node in the network."""
|
||||||
state[prop] = self.__dict__[prop]
|
for (id, data) in self.G.nodes(data=True):
|
||||||
state['G'] = json_graph.node_link_data(self.G)
|
if "agent" in data:
|
||||||
state['environment_agents'] = self._env_agents
|
agent = data["agent"]
|
||||||
state['history'] = self._history
|
if getattr(agent, "alive", True):
|
||||||
state['schedule'] = self.schedule
|
yield agent
|
||||||
return state
|
|
||||||
|
|
||||||
def __setstate__(self, state):
|
def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
|
||||||
for prop in _CONFIG_PROPS:
|
if unique_id is None:
|
||||||
self.__dict__[prop] = state[prop]
|
unique_id = self.next_id()
|
||||||
self._env_agents = state['environment_agents']
|
if node_id is None:
|
||||||
self.G = json_graph.node_link_graph(state['G'])
|
node_id = network.find_unassigned(
|
||||||
self._history = state['history']
|
G=self.G, shuffle=True, random=self.random
|
||||||
# self._env = None
|
)
|
||||||
self.schedule = state['schedule']
|
if node_id is None:
|
||||||
self._queue = []
|
node_id = f"node_for_{unique_id}"
|
||||||
|
|
||||||
|
if node_id not in self.G.nodes:
|
||||||
|
self.G.add_node(node_id)
|
||||||
|
|
||||||
|
assert "agent" not in self.G.nodes[node_id]
|
||||||
|
|
||||||
|
a = self.add_agent(
|
||||||
|
unique_id=unique_id,
|
||||||
|
agent_class=agent_class,
|
||||||
|
topology=self.G,
|
||||||
|
node_id=node_id,
|
||||||
|
**kwargs,
|
||||||
|
)
|
||||||
|
a["visible"] = True
|
||||||
|
return a
|
||||||
|
|
||||||
|
def _check_agent_nodes(self):
|
||||||
|
"""
|
||||||
|
Detect nodes that have agents assigned to them.
|
||||||
|
"""
|
||||||
|
for (id, data) in self.G.nodes(data=True):
|
||||||
|
if "agent_id" in data:
|
||||||
|
agent = self.agents(data["agent_id"])
|
||||||
|
self.G.nodes[id]["agent"] = agent
|
||||||
|
assert not getattr(agent, "node_id", None) or agent.node_id == id
|
||||||
|
agent.node_id = id
|
||||||
|
for agent in self.agents():
|
||||||
|
if hasattr(agent, "node_id"):
|
||||||
|
node_id = agent["node_id"]
|
||||||
|
if node_id not in self.G.nodes:
|
||||||
|
raise ValueError(f"Agent {agent} is assigned to node {agent.node_id} which is not in the network")
|
||||||
|
node = self.G.nodes[node_id]
|
||||||
|
if node.get("agent") is not None and node["agent"] != agent:
|
||||||
|
raise ValueError(f"Node {node_id} already has a different agent assigned to it")
|
||||||
|
self.G.nodes[node_id]["agent"] = agent
|
||||||
|
|
||||||
|
def add_agents(self, agent_classes: List[type], k=None, weights: Optional[List[float]] = None, **kwargs):
|
||||||
|
if k is None:
|
||||||
|
k = len(self.G)
|
||||||
|
if not k:
|
||||||
|
raise ValueError("Cannot add agents to an empty network")
|
||||||
|
super().add_agents(agent_classes, k=k, weights=weights, **kwargs)
|
||||||
|
|
||||||
|
def agent_for_node_id(self, node_id):
|
||||||
|
return self.G.nodes[node_id].get("agent")
|
||||||
|
|
||||||
|
def populate_network(self, agent_class: List[Model], weights: List[float] = None, **agent_params):
|
||||||
|
if isinstance(agent_class, type):
|
||||||
|
agent_class = [agent_class]
|
||||||
|
else:
|
||||||
|
agent_class = list(agent_class)
|
||||||
|
if not weights:
|
||||||
|
weights = [1] * len(agent_class)
|
||||||
|
assert len(self.G)
|
||||||
|
classes = self.random.choices(agent_class, weights, k=len(self.G))
|
||||||
|
toadd = []
|
||||||
|
for (cls, (node_id, node)) in zip(classes, self.G.nodes(data=True)):
|
||||||
|
if "agent" in node:
|
||||||
|
continue
|
||||||
|
node["agent"] = None # Reserve
|
||||||
|
toadd.append(dict(node_id=node_id, topology=self.G, agent_class=cls, **agent_params))
|
||||||
|
for d in toadd:
|
||||||
|
a = self.add_agent(**d)
|
||||||
|
self.G.nodes[d["node_id"]]["agent"] = a
|
||||||
|
assert all("agent" in node for (_, node) in self.G.nodes(data=True))
|
||||||
|
assert len(list(self.network_agents))
|
||||||
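A sketch of the intended workflow for the class, combining a topology with `populate_network` (the agent class below is a placeholder, assuming `soil.agents.Agent` can be subclassed directly):

```python
import networkx as nx
from soil import agents, environment

class MyNodeAgent(agents.Agent):      # placeholder agent class
    pass

env = environment.Environment(topology=nx.complete_graph(5))
env.populate_network(MyNodeAgent)

# every node now holds an agent, and network_agents yields the live ones
assert all("agent" in data for _, data in env.G.nodes(data=True))
assert sum(1 for _ in env.network_agents) == 5
```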
|
|
||||||
|
|
||||||
SoilEnvironment = Environment
|
class EventedEnvironment(BaseEnvironment):
|
||||||
|
def broadcast(self, msg, sender=None, expiration=None, ttl=None, **kwargs):
|
||||||
|
for agent in self.agents(**kwargs):
|
||||||
|
if agent == sender:
|
||||||
|
continue
|
||||||
|
self.logger.debug(f"Telling {repr(agent)}: {msg} ttl={ttl}")
|
||||||
|
try:
|
||||||
|
inbox = agent._inbox
|
||||||
|
except AttributeError:
|
||||||
|
self.logger.info(
|
||||||
|
f"Agent {agent.unique_id} cannot receive events because it does not have an inbox"
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
# Allow for AttributeError exceptions in this part of the code
|
||||||
|
inbox.append(
|
||||||
|
events.Tell(
|
||||||
|
payload=msg,
|
||||||
|
sender=sender,
|
||||||
|
expiration=expiration if ttl is None else self.now + ttl,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class Environment(NetworkEnvironment, EventedEnvironment):
|
||||||
|
"""Default environment class, has both network and event capabilities"""
|
||||||
|
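A sketch of the broadcast mechanism on the combined default `Environment` (the agent class is a placeholder; any agent exposing an `_inbox` list receives the `Tell`, everything else only triggers an informational log line):

```python
from soil import environment

env = environment.Environment()
sender = env.add_agent(MyEventedAgent)        # MyEventedAgent: placeholder class with an _inbox
for _ in range(3):
    env.add_agent(MyEventedAgent)

# ttl=2 is converted into expiration = env.now + 2; the sender itself is skipped
env.broadcast({"topic": "price_update"}, sender=sender, ttl=2)
```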
56
soil/events.py
Normal file
@ -0,0 +1,56 @@
|
|||||||
|
from .time import BaseCond
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from typing import Any
|
||||||
|
from uuid import uuid4
|
||||||
|
|
||||||
|
|
||||||
|
class Event:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class Message:
|
||||||
|
payload: Any
|
||||||
|
sender: Any = None
|
||||||
|
expiration: float = None
|
||||||
|
timestamp: float = None
|
||||||
|
id: int = field(default_factory=uuid4)
|
||||||
|
|
||||||
|
def expired(self, when):
|
||||||
|
return self.expiration is not None and self.expiration < when
|
||||||
|
|
||||||
|
|
||||||
|
class Reply(Message):
|
||||||
|
source: Message
|
||||||
|
|
||||||
|
|
||||||
|
class ReplyCond(BaseCond):
|
||||||
|
def __init__(self, ask, *args, **kwargs):
|
||||||
|
self._ask = ask
|
||||||
|
super().__init__(*args, **kwargs)
|
||||||
|
|
||||||
|
def ready(self, agent, time):
|
||||||
|
return self._ask.reply is not None or self._ask.expired(time)
|
||||||
|
|
||||||
|
def return_value(self, agent):
|
||||||
|
if self._ask.expired(agent.now):
|
||||||
|
raise TimedOut()
|
||||||
|
return self._ask.reply
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return f"ReplyCond({self._ask.id})"
|
||||||
|
|
||||||
|
|
||||||
|
class Ask(Message):
|
||||||
|
reply: Message = None
|
||||||
|
|
||||||
|
def replied(self, expiration=None):
|
||||||
|
return ReplyCond(self)
|
||||||
|
|
||||||
|
|
||||||
|
class Tell(Message):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class TimedOut(Exception):
|
||||||
|
pass
|
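These dataclasses give agents a tiny messaging vocabulary: `Tell` is fire-and-forget, `Ask` reserves a `reply` slot and `replied()` builds a condition an agent can wait on, and expiration is an absolute time compared against `when`. A quick sketch:

```python
from soil.events import Tell, Ask

note = Tell(payload={"topic": "greeting"}, expiration=10)
assert not note.expired(when=5)
assert note.expired(when=11)

question = Ask(payload="are you infected?")
cond = question.replied()   # ReplyCond: ready once .reply is set or the Ask expires
```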
@ -1,17 +1,21 @@
|
|||||||
import os
|
import os
|
||||||
import csv as csvlib
|
import sys
|
||||||
import time
|
from time import time as current_time
|
||||||
from io import BytesIO
|
from io import BytesIO
|
||||||
|
from sqlalchemy import create_engine
|
||||||
|
from textwrap import dedent, indent
|
||||||
|
|
||||||
|
|
||||||
import matplotlib.pyplot as plt
|
import matplotlib.pyplot as plt
|
||||||
import networkx as nx
|
import networkx as nx
|
||||||
|
import pandas as pd
|
||||||
|
|
||||||
|
|
||||||
from .serialization import deserialize
|
from .serialization import deserialize, serialize
|
||||||
from .utils import open_or_reuse, logger, timer
|
from .utils import try_backup, open_or_reuse, logger, timer
|
||||||
|
|
||||||
|
|
||||||
from . import utils
|
from . import utils, network
|
||||||
|
|
||||||
|
|
||||||
class DryRunner(BytesIO):
|
class DryRunner(BytesIO):
|
||||||
@ -22,51 +26,59 @@ class DryRunner(BytesIO):
|
|||||||
|
|
||||||
def write(self, txt):
|
def write(self, txt):
|
||||||
if self.__copy_to:
|
if self.__copy_to:
|
||||||
self.__copy_to.write('{}:::{}'.format(self.__fname, txt))
|
self.__copy_to.write("{}:::{}".format(self.__fname, txt))
|
||||||
try:
|
try:
|
||||||
super().write(txt)
|
super().write(txt)
|
||||||
except TypeError:
|
except TypeError:
|
||||||
super().write(bytes(txt, 'utf-8'))
|
super().write(bytes(txt, "utf-8"))
|
||||||
|
|
||||||
def close(self):
|
def close(self):
|
||||||
content = '(binary data not shown)'
|
content = "(binary data not shown)"
|
||||||
try:
|
try:
|
||||||
content = self.getvalue().decode()
|
content = self.getvalue().decode()
|
||||||
except UnicodeDecodeError:
|
except UnicodeDecodeError:
|
||||||
pass
|
pass
|
||||||
logger.info('**Not** written to {} (dry run mode):\n\n{}\n\n'.format(self.__fname, content))
|
logger.info(
|
||||||
|
"**Not** written to {} (no_dump mode):\n\n{}\n\n".format(
|
||||||
|
self.__fname, content
|
||||||
|
)
|
||||||
|
)
|
||||||
super().close()
|
super().close()
|
||||||
|
|
||||||
|
|
||||||
class Exporter:
|
class Exporter:
|
||||||
'''
|
"""
|
||||||
Interface for all exporters. It is not necessary, but it is useful
|
Interface for all exporters. It is not necessary, but it is useful
|
||||||
if you don't plan to implement all the methods.
|
if you don't plan to implement all the methods.
|
||||||
'''
|
"""
|
||||||
|
|
||||||
def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
|
def __init__(self, simulation, outdir=None, dump=True, copy_to=None):
|
||||||
self.simulation = simulation
|
self.simulation = simulation
|
||||||
outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
|
outdir = outdir or os.path.join(os.getcwd(), "soil_output")
|
||||||
self.outdir = os.path.join(outdir,
|
self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
|
||||||
simulation.group or '',
|
self.dump = dump
|
||||||
simulation.name)
|
if copy_to is None and not dump:
|
||||||
self.dry_run = dry_run
|
copy_to = sys.stdout
|
||||||
self.copy_to = copy_to
|
self.copy_to = copy_to
|
||||||
|
|
||||||
def start(self):
|
def sim_start(self):
|
||||||
'''Method to call when the simulation starts'''
|
"""Method to call when the simulation starts"""
|
||||||
pass
|
pass
|
||||||
|
|
||||||
def end(self, stats):
|
def sim_end(self):
|
||||||
'''Method to call when the simulation ends'''
|
"""Method to call when the simulation ends"""
|
||||||
pass
|
pass
|
||||||
|
|
||||||
def trial(self, env, stats):
|
def iteration_start(self, env):
|
||||||
'''Method to call when a trial ends'''
|
"""Method to call when a iteration start"""
|
||||||
pass
|
pass
|
||||||
|
|
||||||
def output(self, f, mode='w', **kwargs):
|
def iteration_end(self, env, params, params_id):
|
||||||
if self.dry_run:
|
"""Method to call when a iteration ends"""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def output(self, f, mode="w", **kwargs):
|
||||||
|
if not self.dump:
|
||||||
f = DryRunner(f, copy_to=self.copy_to)
|
f = DryRunner(f, copy_to=self.copy_to)
|
||||||
else:
|
else:
|
||||||
try:
|
try:
|
||||||
@ -74,85 +86,197 @@ class Exporter:
|
|||||||
f = os.path.join(self.outdir, f)
|
f = os.path.join(self.outdir, f)
|
||||||
except TypeError:
|
except TypeError:
|
||||||
pass
|
pass
|
||||||
return open_or_reuse(f, mode=mode, **kwargs)
|
return open_or_reuse(f, mode=mode, backup=self.simulation.backup, **kwargs)
|
||||||
|
|
||||||
|
def get_dfs(self, env, **kwargs):
|
||||||
|
yield from get_dc_dfs(env.datacollector,
|
||||||
|
simulation_id=self.simulation.id,
|
||||||
|
iteration_id=env.id,
|
||||||
|
**kwargs)
|
||||||
|
|
||||||
|
|
||||||
class default(Exporter):
|
def get_dc_dfs(dc, **kwargs):
|
||||||
'''Default exporter. Writes sqlite results, as well as the simulation YAML'''
|
dfs = {}
|
||||||
|
dfe = dc.get_model_vars_dataframe()
|
||||||
|
dfe.index.rename("step", inplace=True)
|
||||||
|
dfs["env"] = dfe
|
||||||
|
try:
|
||||||
|
dfa = dc.get_agent_vars_dataframe()
|
||||||
|
dfa.index.rename(["step", "agent_id"], inplace=True)
|
||||||
|
dfs["agents"] = dfa
|
||||||
|
except UserWarning:
|
||||||
|
pass
|
||||||
|
for table_name in dc.tables:
|
||||||
|
dfs[table_name] = dc.get_table_dataframe(table_name)
|
||||||
|
for (name, df) in dfs.items():
|
||||||
|
for (k, v) in kwargs.items():
|
||||||
|
df[k] = v
|
||||||
|
df.set_index(["simulation_id", "iteration_id"], append=True, inplace=True)
|
||||||
|
|
||||||
def start(self):
|
yield from dfs.items()
|
||||||
if not self.dry_run:
|
|
||||||
logger.info('Dumping results to %s', self.outdir)
|
|
||||||
self.simulation.dump_yaml(outdir=self.outdir)
|
|
||||||
else:
|
|
||||||
logger.info('NOT dumping results')
|
|
||||||
|
|
||||||
def trial(self, env, stats):
|
|
||||||
if not self.dry_run:
|
|
||||||
with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
|
|
||||||
env.name)):
|
|
||||||
with self.output('{}.sqlite'.format(env.name), mode='wb') as f:
|
|
||||||
env.dump_sqlite(f)
|
|
||||||
|
|
||||||
def end(self, stats):
|
|
||||||
with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)):
|
|
||||||
with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f:
|
|
||||||
self.simulation.dump_sqlite(f)
|
|
||||||
|
|
||||||
|
|
||||||
|
class SQLite(Exporter):
|
||||||
|
"""Writes sqlite results"""
|
||||||
|
sim_started = False
|
||||||
|
|
||||||
class csv(Exporter):
|
def sim_start(self):
|
||||||
'''Export the state of each environment (and its agents) in a separate CSV file'''
|
if not self.dump:
|
||||||
def trial(self, env, stats):
|
logger.debug("NOT dumping results")
|
||||||
with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.simulation.name,
|
return
|
||||||
env.name,
|
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
|
||||||
self.outdir)):
|
logger.info("Dumping results to %s", self.dbpath)
|
||||||
with self.output('{}.csv'.format(env.name)) as f:
|
if self.simulation.backup:
|
||||||
env.dump_csv(f)
|
try_backup(self.dbpath, remove=True)
|
||||||
|
|
||||||
with self.output('{}.stats.csv'.format(env.name)) as f:
|
if self.simulation.overwrite:
|
||||||
statwriter = csvlib.writer(f, delimiter='\t', quotechar='"', quoting=csvlib.QUOTE_ALL)
|
if os.path.exists(self.dbpath):
|
||||||
|
os.remove(self.dbpath)
|
||||||
|
|
||||||
for stat in stats:
|
self.engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
|
||||||
statwriter.writerow(stat)
|
|
||||||
|
|
||||||
|
sim_dict = {k: serialize(v)[0] for (k,v) in self.simulation.to_dict().items()}
|
||||||
|
sim_dict["simulation_id"] = self.simulation.id
|
||||||
|
df = pd.DataFrame([sim_dict])
|
||||||
|
df.to_sql("configuration", con=self.engine, if_exists="append")
|
||||||
|
|
||||||
class gexf(Exporter):
|
def iteration_end(self, env, params, params_id, *args, **kwargs):
|
||||||
def trial(self, env, stats):
|
if not self.dump:
|
||||||
if self.dry_run:
|
logger.info("Running in NO DUMP mode. Results will NOT be saved to a DB.")
|
||||||
logger.info('Not dumping GEXF in dry_run mode')
|
|
||||||
return
|
return
|
||||||
|
|
||||||
with timer('[GEXF] Dumping simulation {} trial {}'.format(self.simulation.name,
|
with timer(
|
||||||
env.name)):
|
"Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
|
||||||
with self.output('{}.gexf'.format(env.name), mode='wb') as f:
|
):
|
||||||
env.dump_gexf(f)
|
|
||||||
|
pd.DataFrame([{"simulation_id": self.simulation.id,
|
||||||
|
"params_id": params_id,
|
||||||
|
"iteration_id": env.id,
|
||||||
|
"key": k,
|
||||||
|
"value": serialize(v)[0]} for (k,v) in params.items()]).to_sql("parameters", con=self.engine, if_exists="append")
|
||||||
|
|
||||||
|
for (t, df) in self.get_dfs(env, params_id=params_id):
|
||||||
|
df.to_sql(t, con=self.engine, if_exists="append")
|
||||||
|
|
||||||
|
class csv(Exporter):
|
||||||
|
"""Export the state of each environment (and its agents) a CSV file for the simulation"""
|
||||||
|
|
||||||
|
def sim_start(self):
|
||||||
|
super().sim_start()
|
||||||
|
|
||||||
|
def iteration_end(self, env, params, params_id, *args, **kwargs):
|
||||||
|
with timer(
|
||||||
|
"[CSV] Dumping simulation {} iteration {} @ dir {}".format(
|
||||||
|
self.simulation.name, env.id, self.outdir
|
||||||
|
)
|
||||||
|
):
|
||||||
|
for (df_name, df) in self.get_dfs(env, params_id=params_id):
|
||||||
|
with self.output("{}.{}.csv".format(env.id, df_name), mode="a") as f:
|
||||||
|
df.to_csv(f)
|
||||||
|
|
||||||
|
|
||||||
|
# TODO: reimplement GEXF exporting without history
|
||||||
|
class gexf(Exporter):
|
||||||
|
def iteration_end(self, env, *args, **kwargs):
|
||||||
|
if not self.dump:
|
||||||
|
logger.info("Not dumping GEXF (NO_DUMP mode)")
|
||||||
|
return
|
||||||
|
|
||||||
|
with timer(
|
||||||
|
"[GEXF] Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
|
||||||
|
):
|
||||||
|
with self.output("{}.gexf".format(env.id), mode="wb") as f:
|
||||||
|
network.dump_gexf(env.history_to_graph(), f)
|
||||||
|
self.dump_gexf(env, f)
|
||||||
|
|
||||||
|
|
||||||
class dummy(Exporter):
|
class dummy(Exporter):
|
||||||
|
def sim_start(self):
|
||||||
|
with self.output("dummy", "w") as f:
|
||||||
|
f.write("simulation started @ {}\n".format(current_time()))
|
||||||
|
|
||||||
def start(self):
|
def iteration_start(self, env):
|
||||||
with self.output('dummy', 'w') as f:
|
with self.output("dummy", "w") as f:
|
||||||
f.write('simulation started @ {}\n'.format(time.time()))
|
f.write("iteration started@ {}\n".format(current_time()))
|
||||||
|
|
||||||
def trial(self, env, stats):
|
def iteration_end(self, env, *args, **kwargs):
|
||||||
with self.output('dummy', 'w') as f:
|
with self.output("dummy", "w") as f:
|
||||||
for i in env.history_to_tuples():
|
f.write("iteration ended@ {}\n".format(current_time()))
|
||||||
f.write(','.join(map(str, i)))
|
|
||||||
f.write('\n')
|
|
||||||
|
|
||||||
def sim(self, stats):
|
|
||||||
with self.output('dummy', 'a') as f:
|
|
||||||
f.write('simulation ended @ {}\n'.format(time.time()))
|
|
||||||
|
|
||||||
|
def sim_end(self):
|
||||||
|
with self.output("dummy", "a") as f:
|
||||||
|
f.write("simulation ended @ {}\n".format(current_time()))
|
||||||
|
|
||||||
|
|
||||||
class graphdrawing(Exporter):
|
class graphdrawing(Exporter):
|
||||||
|
def iteration_end(self, env, *args, **kwargs):
|
||||||
def trial(self, env, stats):
|
|
||||||
# Outside effects
|
# Outside effects
|
||||||
f = plt.figure()
|
f = plt.figure()
|
||||||
nx.draw(env.G, node_size=10, width=0.2, pos=nx.spring_layout(env.G, scale=100), ax=f.add_subplot(111))
|
nx.draw(
|
||||||
with open('graph-{}.png'.format(env.name)) as f:
|
env.G,
|
||||||
|
node_size=10,
|
||||||
|
width=0.2,
|
||||||
|
pos=nx.spring_layout(env.G, scale=100),
|
||||||
|
ax=f.add_subplot(111),
|
||||||
|
)
|
||||||
|
with open("graph-{}.png".format(env.id)) as f:
|
||||||
f.savefig(f)
|
f.savefig(f)
|
||||||
|
|
||||||
|
|
||||||
|
class summary(Exporter):
|
||||||
|
"""Print a summary of each iteration to sys.stdout"""
|
||||||
|
|
||||||
|
def iteration_end(self, env, *args, **kwargs):
|
||||||
|
msg = ""
|
||||||
|
for (t, df) in self.get_dfs(env):
|
||||||
|
if not len(df):
|
||||||
|
continue
|
||||||
|
tabs = "\t" * 2
|
||||||
|
description = indent(str(df.describe()), tabs)
|
||||||
|
last_line = indent(str(df.iloc[-1:]), tabs)
|
||||||
|
# value_counts = indent(str(df.value_counts()), tabs)
|
||||||
|
value_counts = indent(str(df.apply(lambda x: x.value_counts()).T.stack()), tabs)
|
||||||
|
|
||||||
|
msg += dedent("""
|
||||||
|
Dataframe {t}:
|
||||||
|
Last line:
|
||||||
|
{last_line}
|
||||||
|
|
||||||
|
Description:
|
||||||
|
{description}
|
||||||
|
|
||||||
|
Value counts:
|
||||||
|
{value_counts}
|
||||||
|
|
||||||
|
""").format(**locals())
|
||||||
|
logger.info(msg)
|
||||||
|
|
||||||
|
class YAML(Exporter):
|
||||||
|
"""Writes the configuration of the simulation to a YAML file"""
|
||||||
|
|
||||||
|
def sim_start(self):
|
||||||
|
if not self.dump:
|
||||||
|
logger.debug("NOT dumping results")
|
||||||
|
return
|
||||||
|
with self.output(self.simulation.id + ".dumped.yml") as f:
|
||||||
|
logger.info(f"Dumping simulation configuration to {self.outdir}")
|
||||||
|
f.write(self.simulation.to_yaml())
|
||||||
|
|
||||||
|
class default(Exporter):
|
||||||
|
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
|
||||||
|
|
||||||
|
def __init__(self, *args, exporter_cls=[], **kwargs):
|
||||||
|
exporter_cls = exporter_cls or [YAML, SQLite]
|
||||||
|
self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
|
||||||
|
|
||||||
|
def sim_start(self, *args, **kwargs):
|
||||||
|
for exporter in self.inner:
|
||||||
|
exporter.sim_start(*args, **kwargs)
|
||||||
|
|
||||||
|
def sim_end(self, *args, **kwargs):
|
||||||
|
for exporter in self.inner:
|
||||||
|
exporter.sim_end(*args, **kwargs)
|
||||||
|
|
||||||
|
def iteration_end(self, *args, **kwargs):
|
||||||
|
for exporter in self.inner:
|
||||||
|
exporter.iteration_end(*args, **kwargs)
|
||||||
|
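With the hook renaming above (`sim_start`/`iteration_end`/`sim_end` replacing `start`/`trial`/`end`), a custom exporter only needs to override the hooks it cares about. A hedged sketch:

```python
from soil.exporters import Exporter

class AgentCount(Exporter):
    """Append the final number of agents of every iteration to a text file."""

    def iteration_end(self, env, params, params_id, *args, **kwargs):
        with self.output(f"{env.id}.counts.txt", mode="a") as f:
            f.write(f"{params_id}\t{env.count_agents()}\n")
```

Like the built-in exporters, this honours `dump=False`, because `self.output` transparently swaps the real file for a `DryRunner`.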
83
soil/network.py
Normal file
@ -0,0 +1,83 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from typing import Dict
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import random
|
||||||
|
|
||||||
|
import networkx as nx
|
||||||
|
|
||||||
|
from . import config, serialization, basestring
|
||||||
|
|
||||||
|
|
||||||
|
def from_topology(topology, dir_path: str = None):
|
||||||
|
if topology is None:
|
||||||
|
return nx.Graph()
|
||||||
|
if isinstance(topology, nx.Graph):
|
||||||
|
return topology
|
||||||
|
|
||||||
|
# If it's a dict, assume it's a node-link graph
|
||||||
|
if isinstance(topology, dict):
|
||||||
|
try:
|
||||||
|
return nx.json_graph.node_link_graph(topology)
|
||||||
|
except Exception as ex:
|
||||||
|
raise ValueError("Unknown topology format")
|
||||||
|
|
||||||
|
# Otherwise, treat like a path
|
||||||
|
path = topology
|
||||||
|
if dir_path and not os.path.isabs(path):
|
||||||
|
path = os.path.join(dir_path, path)
|
||||||
|
extension = os.path.splitext(path)[1][1:]
|
||||||
|
kwargs = {}
|
||||||
|
if extension == "gexf":
|
||||||
|
kwargs["version"] = "1.2draft"
|
||||||
|
kwargs["node_type"] = int
|
||||||
|
try:
|
||||||
|
method = getattr(nx.readwrite, "read_" + extension)
|
||||||
|
except AttributeError:
|
||||||
|
raise AttributeError("Unknown format")
|
||||||
|
return method(path, **kwargs)
|
||||||
|
|
||||||
|
|
||||||
|
def from_params(generator, dir_path: str = None, **params):
|
||||||
|
|
||||||
|
if dir_path not in sys.path:
|
||||||
|
sys.path.append(dir_path)
|
||||||
|
|
||||||
|
method = serialization.deserializer(
|
||||||
|
generator,
|
||||||
|
known_modules=[
|
||||||
|
"networkx.generators",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
return method(**params)
|
||||||
|
|
||||||
|
|
||||||
|
def find_unassigned(G, shuffle=False, random=random):
|
||||||
|
"""
|
||||||
|
Find a node in the topology that has no agent assigned to it.
|
||||||
|
|
||||||
|
Returns the id of an unassigned node, or None if every node already has an agent.
|
||||||
|
"""
|
||||||
|
candidates = list(G.nodes(data=True))
|
||||||
|
if shuffle:
|
||||||
|
random.shuffle(candidates)
|
||||||
|
for next_id, data in candidates:
|
||||||
|
if "agent" not in data:
|
||||||
|
return next_id
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
def dump_gexf(G, f):
|
||||||
|
for node in G.nodes():
|
||||||
|
if "pos" in G.nodes[node]:
|
||||||
|
G.nodes[node]["viz"] = {
|
||||||
|
"position": {
|
||||||
|
"x": G.nodes[node]["pos"][0],
|
||||||
|
"y": G.nodes[node]["pos"][1],
|
||||||
|
"z": 0.0,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
del G.nodes[node]["pos"]
|
||||||
|
|
||||||
|
nx.write_gexf(G, f, version="1.2draft")
|
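A sketch of the new loaders side by side: `from_topology` accepts a graph object, a node-link dict or a path, and `from_params` resolves a generator by name (searching `networkx.generators`) and calls it with the given parameters:

```python
import networkx as nx
from soil import network

g1 = network.from_topology(nx.path_graph(3))                  # already a graph: returned as-is
g2 = network.from_topology({"nodes": [{"id": 0}, {"id": 1}],
                            "links": [{"source": 0, "target": 1}]})
g3 = network.from_params(generator="complete_graph", n=4)     # resolved in networkx.generators

node = network.find_unassigned(g3)                            # first node with no "agent" data
```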
32
soil/parameters.py
Normal file
@ -0,0 +1,32 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from typing_extensions import Annotated
|
||||||
|
import annotated_types
|
||||||
|
from typing import *
|
||||||
|
|
||||||
|
from dataclasses import dataclass
|
||||||
|
|
||||||
|
class Parameter:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
def floatrange(
|
||||||
|
*,
|
||||||
|
gt: Optional[float] = None,
|
||||||
|
ge: Optional[float] = None,
|
||||||
|
lt: Optional[float] = None,
|
||||||
|
le: Optional[float] = None,
|
||||||
|
multiple_of: Optional[float] = None,
|
||||||
|
) -> type[float]:
|
||||||
|
return Annotated[
|
||||||
|
float,
|
||||||
|
annotated_types.Interval(gt=gt, ge=ge, lt=lt, le=le),
|
||||||
|
annotated_types.MultipleOf(multiple_of) if multiple_of is not None else None,
|
||||||
|
]
|
||||||
|
|
||||||
|
function = Annotated[Callable, Parameter]
|
||||||
|
Integer = Annotated[int, Parameter]
|
||||||
|
Float = Annotated[float, Parameter]
|
||||||
|
|
||||||
|
|
||||||
|
probability = floatrange(ge=0, le=1)
|
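The module is a thin layer over `annotated_types`: `floatrange` builds an `Annotated[float, Interval(...)]` type and `probability` is the `[0, 1]` instance. How these annotations are consumed elsewhere is not shown in this diff, so the usage below is only an illustration of the intended annotation style:

```python
from soil import parameters

Ratio = parameters.floatrange(ge=0.0, le=1.0)          # same shape as parameters.probability
Level = parameters.floatrange(ge=0, le=10, multiple_of=0.5)

# Hypothetical usage as type hints on model parameters:
prob_infect: parameters.probability = 0.25
n_agents: parameters.Integer = 100
```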
@ -2,58 +2,28 @@ import os
import logging
import ast
import sys
import re
import importlib
import importlib.machinery, importlib.util
from glob import glob
from itertools import product, chain

import yaml
import networkx as nx

from . import config

from jinja2 import Template


logger = logging.getLogger("soil")

# (the old load_network() helper that used to live here was removed in this change)


def load_file(infile):
    folder = os.path.dirname(infile)
    if folder not in sys.path:
        sys.path.append(folder)
    with open(infile, "r") as f:
        return list(chain.from_iterable(map(expand_template, load_string(f))))


@ -62,14 +32,15 @@ def load_string(string):
def expand_template(config):
    if "template" not in config:
        yield config
        return
    if "vars" not in config:
        raise ValueError(
            ("You must provide a definition of variables" " for the template.")
        )

    template = config["template"]

    if not isinstance(template, str):
        template = yaml.dump(template)

@ -81,9 +52,9 @@ def expand_template(config):
    blank_str = template.render({k: 0 for k in params[0].keys()})
    blank = list(load_string(blank_str))
    if len(blank) > 1:
        raise ValueError("Templates must not return more than one configuration")
    if "name" in blank[0]:
        raise ValueError("Templates cannot be named, use group instead")

    for ps in params:
        string = template.render(ps)

@ -92,24 +63,24 @@ def expand_template(config):
def params_for_template(config):
    sampler_config = config.get("sampler", {"N": 100})
    sampler = sampler_config.pop("method", "SALib.sample.morris.sample")
    sampler = deserializer(sampler)
    bounds = config["vars"]["bounds"]

    problem = {
        "num_vars": len(bounds),
        "names": list(bounds.keys()),
        "bounds": list(v for v in bounds.values()),
    }
    samples = sampler(problem, **sampler_config)

    lists = config["vars"].get("lists", {})
    names = list(lists.keys())
    values = list(lists.values())
    combs = list(product(*values))

    allnames = names + problem["names"]
    allvalues = [(list(i[0]) + list(i[1])) for i in product(combs, samples)]
    params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
    return params

@ -117,106 +88,175 @@ def params_for_template(config):
def load_files(*patterns, **kwargs):
    for pattern in patterns:
        for i in glob(pattern, **kwargs, recursive=True):
            for cfg in load_file(i):
                path = os.path.abspath(i)
                yield cfg, path


def load_config(cfg):
    if isinstance(cfg, dict):
        yield config.load_config(cfg), os.getcwd()
    else:
        yield from load_files(cfg)


builtins = importlib.import_module("builtins")

KNOWN_MODULES = {
    'soil': None,
}

MODULE_FILES = {}


def add_source_file(file):
    """Add a file to the list of known modules"""
    file = os.path.abspath(file)
    if file in MODULE_FILES:
        logger.warning(f"File {file} already added as module {MODULE_FILES[file]}. Reloading")
        remove_source_file(file)
    modname = f"imported_module_{len(MODULE_FILES)}"
    loader = importlib.machinery.SourceFileLoader(modname, file)
    spec = importlib.util.spec_from_loader(loader.name, loader)
    my_module = importlib.util.module_from_spec(spec)
    loader.exec_module(my_module)
    MODULE_FILES[file] = modname
    KNOWN_MODULES[modname] = my_module


def remove_source_file(file):
    """Remove a file from the list of known modules"""
    file = os.path.abspath(file)
    modname = None
    try:
        modname = MODULE_FILES.pop(file)
        KNOWN_MODULES.pop(modname)
    except KeyError as ex:
        raise ValueError(f"File {file} had not been added as a module: {ex}")


def get_module(modname):
    """Get a module from the list of known modules"""
    if modname not in KNOWN_MODULES or KNOWN_MODULES[modname] is None:
        module = importlib.import_module(modname)
        KNOWN_MODULES[modname] = module
    return KNOWN_MODULES[modname]


def name(value, known_modules=KNOWN_MODULES):
    """Return a name that can be imported, to serialize/deserialize an object"""
    if value is None:
        return "None"
    if not isinstance(value, type):  # Get the class name first
        value = type(value)
    tname = value.__name__
    if hasattr(builtins, tname):
        return tname
    modname = value.__module__
    if modname == "__main__":
        return tname
    if known_modules and modname in known_modules:
        return tname
    for kmod in known_modules:
        module = get_module(kmod)
        if hasattr(module, tname):
            return tname
    return "{}.{}".format(modname, tname)


def serializer(type_):
    if type_ != "str" and hasattr(builtins, type_):
        return repr
    return lambda x: x


def serialize(v, known_modules=KNOWN_MODULES):
    """Get a text representation of an object."""
    tname = name(v, known_modules=known_modules)
    func = serializer(tname)
    return func(v), tname


def serialize_dict(d, known_modules=KNOWN_MODULES):
    try:
        d = dict(d)
    except (ValueError, TypeError) as ex:
        return serialize(d)[0]
    for (k, v) in reversed(list(d.items())):
        if isinstance(v, dict):
            d[k] = serialize_dict(v, known_modules=known_modules)
        elif isinstance(v, list):
            for ix in range(len(v)):
                v[ix] = serialize_dict(v[ix], known_modules=known_modules)
        elif isinstance(v, type):
            d[k] = serialize(v, known_modules=known_modules)[1]
    return d


IS_CLASS = re.compile(r"<class '(.*)'>")


def deserializer(type_, known_modules=KNOWN_MODULES):
    if type(type_) != str:  # Already deserialized
        return type_
    if type_ == "str":
        return lambda x="": x
    if type_ == "None":
        return lambda x=None: None
    if hasattr(builtins, type_):  # Check if it's a builtin type
        cls = getattr(builtins, type_)
        return lambda x=None: ast.literal_eval(x) if x is not None else cls()
    match = IS_CLASS.match(type_)
    if match:
        modname, tname = match.group(1).rsplit(".", 1)
        module = get_module(modname)
        cls = getattr(module, tname)
        return getattr(cls, "deserialize", cls)

    # Otherwise, see if we can find the module and the class
    options = []

    for mod in known_modules:
        if mod:
            options.append((mod, type_))

    if "." in type_:  # Fully qualified module
        module, type_ = type_.rsplit(".", 1)
        options.append((module, type_))

    errors = []
    for modname, tname in options:
        try:
            module = get_module(modname)
            cls = getattr(module, tname)
            return getattr(cls, "deserialize", cls)
        except (ImportError, AttributeError) as ex:
            errors.append((modname, tname, ex))
    raise ValueError('Could not find type "{}". Tried: {}'.format(type_, errors))


def deserialize(type_, value=None, globs=None, **kwargs):
    """Get an object from a text representation"""
    if not isinstance(type_, str):
        return type_
    if globs and type_ in globs:
        des = globs[type_]
    else:
        try:
            des = deserializer(type_, **kwargs)
        except ValueError as ex:
            try:
                des = eval(type_)
            except Exception:
                raise ex
    if value is None:
        return des
    return des(value)


def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
    """Return the list of deserialized objects"""
    objects = []
    for name in names:
        mod = deserialize(name, known_modules=known_modules)
        objects.append(mod(*args, **kwargs))
    return objects
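Example (not part of the change): a short sketch of the round trip these helpers provide, assuming the module is importable as soil.serialization.

# Hypothetical usage sketch of the (de)serialization helpers above.
from soil import serialization

text, tname = serialization.serialize(42)        # -> ("42", "int")
value = serialization.deserialize(tname, text)   # -> 42

# Class names resolve against KNOWN_MODULES or a fully qualified name, and fall
# back to the class's own `deserialize` attribute when it defines one.
Graph = serialization.deserialize("networkx.Graph")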
@ -1,355 +1,395 @@
# (relative to the old version, this rewrite drops the constructor-based Simulation,
#  its History/stats helpers, the run_trial/run_gen machinery, the yaml/pickle/sqlite
#  dump methods, __getstate__/__setstate__, and the old run_from_config helpers)
import os
from time import time as current_time, strftime
import sys
import yaml
import hashlib

import inspect
import logging
import networkx as nx

from tqdm.auto import tqdm

from textwrap import dedent

from dataclasses import dataclass, field, asdict, replace
from typing import Any, Dict, Union, Optional, List


from functools import partial
from contextlib import contextmanager
from itertools import product
import json


from . import serialization, exporters, utils, basestring, agents
from .environment import Environment
from .utils import logger, run_and_return_exceptions
from .debugging import set_trace

_AVOID_RUNNING = False
_QUEUED = []


@contextmanager
def do_not_run():
    global _AVOID_RUNNING
    _AVOID_RUNNING = True
    try:
        logger.debug("NOT RUNNING")
        yield
    finally:
        logger.debug("RUNNING AGAIN")
        _AVOID_RUNNING = False


def _iter_queued():
    while _QUEUED:
        (cls, params) = _QUEUED.pop(0)
        yield replace(cls, parameters=params)


# TODO: change documentation for simulation
# TODO: rename iterations to iterations
# TODO: make parameters a dict of iterable/any
@dataclass
class Simulation:
    """
    A simulation is a collection of agents and a model. It is responsible for running the model and agents, and collecting data from them.

    Args:
        version: The version of the simulation. This is used to determine how to load the simulation.
        name: The name of the simulation.
        description: A description of the simulation.
        group: The group that the simulation belongs to.
        model: The model to use for the simulation. This can be a string or a class.
        parameters: The parameters to pass to the model.
        matrix: A matrix of values for each parameter.
        seed: The seed to use for the simulation.
        dir_path: The directory path to use for the simulation.
        max_time: The maximum time to run the simulation.
        max_steps: The maximum number of steps to run the simulation.
        interval: The interval to use for the simulation.
        iterations: The number of iterations (times) to run the simulation.
        num_processes: The number of processes to use for the simulation. If greater than one, simulations will be performed in parallel. This may make debugging and error handling difficult.
        tables: The tables to use in the simulation datacollector
        agent_reporters: The agent reporters to use in the datacollector
        model_reporters: The model reporters to use in the datacollector
        dry_run: Whether or not to run the simulation. If True, the simulation will not be run.
        backup: Whether or not to backup the simulation. If True, the simulation files will be backed up to a different directory.
        overwrite: Whether or not to replace existing simulation data.
        source_file: Python file to use to find additional classes.
    """

    version: str = "2"
    source_file: Optional[str] = None
    name: Optional[str] = None
    description: Optional[str] = ""
    group: str = None
    backup: bool = False
    overwrite: bool = False
    dry_run: bool = False
    dump: bool = False
    model: Union[str, type] = "soil.Environment"
    parameters: dict = field(default_factory=dict)
    matrix: dict = field(default_factory=dict)
    seed: str = "default"
    dir_path: str = field(default_factory=lambda: os.getcwd())
    max_time: float = None
    max_steps: int = None
    interval: int = 1
    iterations: int = 1
    num_processes: Optional[int] = 1
    exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
    model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
    agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
    tables: Optional[Dict[str, Any]] = field(default_factory=dict)
    outdir: str = field(default_factory=lambda: os.path.join(os.getcwd(), "soil_output"))
    # outdir: Optional[str] = None
    exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
    level: int = logging.INFO
    skip_test: Optional[bool] = False
    debug: Optional[bool] = False

    def __post_init__(self):
        if self.name is None:
            if isinstance(self.model, str):
                self.name = self.model
            else:
                self.name = self.model.__name__
        self.logger = logger.getChild(self.name)
        self.logger.setLevel(self.level)

        if self.source_file:
            source_file = self.source_file
            if not os.path.isabs(source_file):
                source_file = os.path.abspath(os.path.join(self.dir_path, source_file))
            serialization.add_source_file(source_file)
            self.source_file = source_file

        if isinstance(self.model, str):
            self.model = serialization.deserialize(self.model)

        def deserialize_reporters(reporters):
            for (k, v) in reporters.items():
                if isinstance(v, str) and v.startswith("py:"):
                    reporters[k] = serialization.deserialize(v.split(":", 1)[1])
            return reporters

        self.agent_reporters = deserialize_reporters(self.agent_reporters)
        self.model_reporters = deserialize_reporters(self.model_reporters)
        self.tables = deserialize_reporters(self.tables)
        if self.source_file:
            serialization.remove_source_file(self.source_file)
        self.id = f"{self.name}_{current_time()}"

    def run(self, **kwargs):
        """Run the simulation and return the list of resulting environments"""
        if kwargs:
            return replace(self, **kwargs).run()

        self.logger.debug(
            dedent(
                """
                Simulation:
                ---
                """
            )
            + self.to_yaml()
        )
        param_combinations = self._collect_params(**kwargs)
        if _AVOID_RUNNING:
            _QUEUED.extend((self, param) for param in param_combinations)
            return []

        self.logger.debug("Using exporters: %s", self.exporters or [])

        exporters = serialization.deserialize_all(
            self.exporters,
            simulation=self,
            known_modules=[
                "soil.exporters",
            ],
            dump=self.dump and not self.dry_run,
            outdir=self.outdir,
            **self.exporter_params,
        )

        results = []
        for exporter in exporters:
            exporter.sim_start()

        for params in tqdm(param_combinations, desc=self.name, unit="configuration"):
            for (k, v) in params.items():
                tqdm.write(f"{k} = {v}")
            sha = hashlib.sha256()
            sha.update(repr(sorted(params.items())).encode())
            params_id = sha.hexdigest()[:7]
            for env in self._run_iters_for_params(params):
                for exporter in exporters:
                    exporter.iteration_end(env, params, params_id)
                results.append(env)

        for exporter in exporters:
            exporter.sim_end()

        return results

    def _collect_params(self):
        parameters = []
        if self.parameters:
            parameters.append(self.parameters)
        if self.matrix:
            assert isinstance(self.matrix, dict)
            for values in product(*(self.matrix.values())):
                parameters.append(dict(zip(self.matrix.keys(), values)))

        if not parameters:
            parameters = [{}]
        if self.dump:
            self.logger.info("Output directory: %s", self.outdir)

        return parameters

    def _run_iters_for_params(
        self,
        params
    ):
        """Run the simulation and yield the resulting environments."""

        try:
            if self.source_file:
                serialization.add_source_file(self.source_file)

            with utils.timer(f"running for config {params}"):
                if self.dry_run:
                    def func(*args, **kwargs):
                        return None
                else:
                    func = self._run_model

                for env in tqdm(utils.run_parallel(
                    func=func,
                    iterable=range(self.iterations),
                    **params,
                ), total=self.iterations, leave=False):
                    if env is None and self.dry_run:
                        continue

                    yield env
        finally:
            if self.source_file:
                serialization.remove_source_file(self.source_file)

    def _get_env(self, iteration_id, params):
        """Create an environment for a iteration of the simulation"""

        iteration_id = str(iteration_id)

        agent_reporters = self.agent_reporters
        agent_reporters.update(params.pop("agent_reporters", {}))
        model_reporters = self.model_reporters
        model_reporters.update(params.pop("model_reporters", {}))

        return self.model(
            id=iteration_id,
            seed=f"{self.seed}_iteration_{iteration_id}",
            dir_path=self.dir_path,
            interval=self.interval,
            logger=self.logger.getChild(iteration_id),
            agent_reporters=agent_reporters,
            model_reporters=model_reporters,
            tables=self.tables,
            **params,
        )

    def _run_model(self, iteration_id, **params):
        """
        Run a single iteration of the simulation

        """
        # Set-up iteration environment and graph
        model = self._get_env(iteration_id, params)
        with utils.timer("Simulation {} iteration {}".format(self.name, iteration_id)):

            max_time = self.max_time
            max_steps = self.max_steps

            if (max_time is not None) and (max_steps is not None):
                is_done = lambda model: (not model.running) or (model.schedule.time >= max_time) or (model.schedule.steps >= max_steps)
            elif max_time is not None:
                is_done = lambda model: (not model.running) or (model.schedule.time >= max_time)
            elif max_steps is not None:
                is_done = lambda model: (not model.running) or (model.schedule.steps >= max_steps)
            else:
                is_done = lambda model: not model.running

            if not model.schedule.agents:
                raise Exception("No agents in model. This is probably a bug. Make sure that the model has agents scheduled after its initialization.")

            newline = "\n"
            self.logger.debug(
                dedent(
                    f"""
                    Model stats:
                    Agent count: { model.schedule.get_agent_count() }):
                    Topology size: { len(model.G) if hasattr(model, "G") else 0 }
                    """
                )
            )

            if self.debug:
                set_trace()

            while not is_done(model):
                self.logger.debug(
                    f'Simulation time {model.schedule.time}/{max_time}.'
                )
                model.step()

        return model

    def to_dict(self):
        d = asdict(self)
        return serialization.serialize_dict(d)

    def to_yaml(self):
        return yaml.dump(self.to_dict())


def iter_from_file(*files, **kwargs):
    for f in files:
        try:
            yield from iter_from_py(f, **kwargs)
        except ValueError as ex:
            yield from iter_from_config(f, **kwargs)


def from_file(*args, **kwargs):
    return list(iter_from_file(*args, **kwargs))


def iter_from_config(*cfgs, **kwargs):
    for config in cfgs:
        configs = list(serialization.load_config(config))
        for config, path in configs:
            d = dict(config)
            d.update(kwargs)
            if "dir_path" not in d:
                d["dir_path"] = os.path.dirname(path)
            yield Simulation(**d)


def from_config(conf_or_path):
    lst = list(iter_from_config(conf_or_path))
    if len(lst) > 1:
        raise AttributeError("Provide only one configuration")
    return lst[0]


def iter_from_py(pyfile, module_name='imported_file', **kwargs):
    """Try to load every Simulation instance in a given Python file"""
    import importlib
    added = False
    sims = []
    assert not _AVOID_RUNNING
    with do_not_run():
        assert _AVOID_RUNNING
        spec = importlib.util.spec_from_file_location(module_name, pyfile)
        folder = os.path.dirname(pyfile)
        if folder not in sys.path:
            added = True
            sys.path.append(folder)
        if not spec:
            raise ValueError(f"{pyfile} does not seem to be a Python module")
        module = importlib.util.module_from_spec(spec)
        sys.modules[module_name] = module
        spec.loader.exec_module(module)
        for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
            sims.append(sim)
        for sim in _iter_queued():
            sims.append(sim)
        if not sims:
            for (_name, sim) in inspect.getmembers(module, lambda x: inspect.isclass(x) and issubclass(x, Simulation)):
                sims.append(sim(**kwargs))
        del sys.modules[module_name]
    assert not _AVOID_RUNNING
    if not sims:
        raise AttributeError(f"No valid configurations found in {pyfile}")
    if added:
        sys.path.remove(folder)
    for sim in sims:
        yield replace(sim, **kwargs)


def from_py(pyfile):
    return next(iter_from_py(pyfile))


def run_from_file(*files, **kwargs):
    for sim in iter_from_file(*files):
        logger.info(f"Using config(s): {sim.name}")
        sim.run_simulation(**kwargs)


def run(env, iterations=1, num_processes=1, dump=False, name="test", **kwargs):
    return Simulation(model=env, iterations=iterations, name=name, dump=dump, num_processes=num_processes, **kwargs).run()
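Example (not part of the change): a sketch of how the dataclass-based API above fits together. `my_module.MyModel` and its parameters are made up for illustration; the model string is resolved through soil.serialization.

# Hypothetical usage sketch of the Simulation dataclass defined above.
from soil import Simulation

sim = Simulation(
    model="my_module.MyModel",            # hypothetical model, resolved by name
    matrix={"prob_infection": [0.1, 0.5],
            "num_agents": [10, 100]},     # 2 x 2 = 4 parameter combinations
    iterations=3,                         # each combination is run three times
    max_steps=100,
)
envs = sim.run()                          # list of resulting model instances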
soil/stats.py (106 lines, deleted by this change)
@ -1,106 +0,0 @@
import pandas as pd

from collections import Counter


class Stats:
    '''
    Interface for all stats. It is not necessary, but it is useful
    if you don't plan to implement all the methods.
    '''

    def __init__(self, simulation):
        self.simulation = simulation

    def start(self):
        '''Method to call when the simulation starts'''
        pass

    def end(self):
        '''Method to call when the simulation ends'''
        return {}

    def trial(self, env):
        '''Method to call when a trial ends'''
        return {}


class distribution(Stats):
    '''
    Calculate the distribution of agent states at the end of each trial,
    the mean value, and its deviation.
    '''

    def start(self):
        self.means = []
        self.counts = []

    def trial(self, env):
        df = env[None, None, None].df()
        df = df.drop('SEED', axis=1)
        ix = df.index[-1]
        attrs = df.columns.get_level_values(0)
        vc = {}
        stats = {
            'mean': {},
            'count': {},
        }
        for a in attrs:
            t = df.loc[(ix, a)]
            try:
                stats['mean'][a] = t.mean()
                self.means.append(('mean', a, t.mean()))
            except TypeError:
                pass

            for name, count in t.value_counts().items():
                if a not in stats['count']:
                    stats['count'][a] = {}
                stats['count'][a][name] = count
                self.counts.append(('count', a, name, count))

        return stats

    def end(self):
        dfm = pd.DataFrame(self.means, columns=['metric', 'key', 'value'])
        dfc = pd.DataFrame(self.counts, columns=['metric', 'key', 'value', 'count'])

        count = {}
        mean = {}

        if self.means:
            res = dfm.drop('metric', axis=1).groupby(by=['key']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
            mean = res['value'].to_dict()
        if self.counts:
            res = dfc.drop('metric', axis=1).groupby(by=['key', 'value']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
            for k, v in res['count'].to_dict().items():
                if k not in count:
                    count[k] = {}
                for tup, times in v.items():
                    subkey, subcount = tup
                    if subkey not in count[k]:
                        count[k][subkey] = {}
                    count[k][subkey][subcount] = times

        return {'count': count, 'mean': mean}


class defaultStats(Stats):

    def trial(self, env):
        c = Counter()
        c.update(a.__class__.__name__ for a in env.network_agents)

        c2 = Counter()
        c2.update(a['id'] for a in env.network_agents)

        return {
            'network ': {
                'n_nodes': env.G.number_of_nodes(),
                'n_edges': env.G.number_of_edges(),
            },
            'agents': {
                'model_count': dict(c),
                'state_count': dict(c2),
            }
        }
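Note (not part of the change): summaries of the kind the deleted defaultStats produced can be gathered through the reporter fields that Simulation now accepts. A hypothetical sketch, assuming the model exposes a graph as `m.G`:

# Hypothetical sketch using model_reporters instead of the removed stats classes.
sim = Simulation(
    model="my_module.MyModel",                        # hypothetical model
    model_reporters={
        "n_nodes": lambda m: m.G.number_of_nodes(),   # assumes a networkx graph on the model
        "agent_count": lambda m: m.schedule.get_agent_count(),
    },
)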
soil/time.py
@ -1,12 +1,22 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapreplace
import math

from inspect import getsource
from numbers import Number
from textwrap import dedent

from .utils import logger
from mesa import Agent as MesaAgent


INFINITY = float("inf")


class DeadAgent(Exception):
    pass


class When:
    def __init__(self, time):
@ -17,6 +27,10 @@ class When:
    def abs(self, time):
        return self._time

    def schedule_next(self, time, delta, first=False):
        return (self._time, None)


NEVER = When(INFINITY)

@ -24,11 +38,53 @@ class Delta(When):
    def __init__(self, delta):
        self._delta = delta

    def abs(self, time):
        return self._time + self._delta

    def __eq__(self, other):
        if isinstance(other, Delta):
            return self._delta == other._delta
        return False

    def schedule_next(self, time, delta, first=False):
        return (time + self._delta, None)

    def __repr__(self):
        return str(f"Delta({self._delta})")


class BaseCond:
    def __init__(self, msg=None, delta=None, eager=False):
        self._msg = msg
        self._delta = delta
        self.eager = eager

    def schedule_next(self, time, delta, first=False):
        if first and self.eager:
            return (time, self)
        if self._delta:
            delta = self._delta
        return (time + delta, self)

    def return_value(self, agent):
        return None

    def __repr__(self):
        return self._msg or self.__class__.__name__


class Cond(BaseCond):
    def __init__(self, func, *args, **kwargs):
        self._func = func
        super().__init__(*args, **kwargs)

    def ready(self, agent, time):
        return self._func(agent)

    def __repr__(self):
        if self._msg:
            return self._msg
        return str(f'Cond("{dedent(getsource(self._func)).strip()}")')


class TimedActivation(BaseScheduler):
@ -36,45 +92,122 @@ class TimedActivation(BaseScheduler):
    In each activation, each agent will update its 'next_time'.
    """

    def __init__(self, *args, shuffle=True, **kwargs):
        super().__init__(*args, **kwargs)
        self._next = {}
        self._queue = []
        self._shuffle = shuffle
        # self.step_interval = getattr(self.model, "interval", 1)
        self.step_interval = self.model.interval
        self.logger = getattr(self.model, "logger", logger).getChild(f"time_{ self.model }")
        self.next_time = self.time

    def add(self, agent: MesaAgent, when=None):
        if when is None:
            when = self.time
        elif isinstance(when, When):
            when = when.abs()

        self._schedule(agent, None, when)
        super().add(agent)

    def _schedule(self, agent, condition=None, when=None, replace=False):
        if condition:
            if not when:
                when, condition = condition.schedule_next(
                    when or self.time, self.step_interval
                )
        else:
            if when is None:
                when = self.time + self.step_interval
            condition = None
        if self._shuffle:
            key = (when, self.model.random.random(), condition)
        else:
            key = (when, agent.unique_id, condition)
        self._next[agent.unique_id] = key
        if replace:
            heapreplace(self._queue, (key, agent))
        else:
            heappush(self._queue, (key, agent))

    def step(self) -> None:
        """
        Executes agents in order, one at a time. After each step,
        an agent will signal when it wants to be scheduled next.
        """

        self.logger.debug(f"Simulation step {self.time}")
        if not self.model.running or self.time == INFINITY:
            return

        self.logger.debug(f"Queue length: %s", len(self._queue))

        while self._queue:
            ((when, _id, cond), agent) = self._queue[0]
            if when > self.time:
                break

            if cond:
                if not cond.ready(agent, self.time):
                    self._schedule(agent, cond, replace=True)
                    continue
                try:
                    agent._last_return = cond.return_value(agent)
                except Exception as ex:
                    agent._last_except = ex
            else:
                agent._last_return = None
                agent._last_except = None

            self.logger.debug("Stepping agent %s", agent)
            self._next.pop(agent.unique_id, None)

            try:
                returned = agent.step()
            except DeadAgent:
                agent.alive = False
                heappop(self._queue)
                continue

            # Check status for MESA agents
            if not getattr(agent, "alive", True):
                heappop(self._queue)
                continue

            if returned:
                next_check = returned.schedule_next(
                    self.time, self.step_interval, first=True
                )
                self._schedule(agent, when=next_check[0], condition=next_check[1], replace=True)
            else:
                next_check = (self.time + self.step_interval, None)

                self._schedule(agent, replace=True)

        self.steps += 1

        if not self._queue:
            self.model.running = False
            self.time = INFINITY
            return

        next_time = self._queue[0][0][0]

        if next_time < self.time:
            raise Exception(
                f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
            )
        self.logger.debug("Updating time step: %s -> %s ", self.time, next_time)

        self.time = next_time


class ShuffledTimedActivation(TimedActivation):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, shuffle=True, **kwargs)


class OrderedTimedActivation(TimedActivation):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, shuffle=False, **kwargs)
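Example (not part of the change): how the scheduling primitives above compose. Values assume a step_interval of 1; the lambda is made up for illustration.

# Illustrative sketch of When/Delta/Cond scheduling.
d = Delta(2)
print(d.schedule_next(time=5, delta=1))     # -> (7, None): fire again 2 time units later

c = Cond(lambda agent: agent.ready, delta=3)
print(c.schedule_next(5, 1))                # -> (8, c): re-check the condition at t=8
print(c.schedule_next(5, 1, first=True))    # -> (8, c); with eager=True it would be (5, c)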
soil/utils.py
@ -1,71 +1,106 @@
import logging
from time import time as current_time, strftime, gmtime, localtime
import os
import traceback

from functools import partial
from shutil import copyfile, move
from multiprocessing import Pool, cpu_count

from contextlib import contextmanager

logger = logging.getLogger("soil")
logger.setLevel(logging.WARNING)

timeformat = "%H:%M:%S"

if os.environ.get("SOIL_VERBOSE", ""):
    logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
else:
    logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"

logFormatter = logging.Formatter(logformat, timeformat)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)

logging.basicConfig(
    level=logging.INFO,
    handlers=[
        consoleHandler,
    ],
)


@contextmanager
def timer(name="task", pre="", function=logger.info, to_object=None):
    start = current_time()
    function("{}Starting {} at {}.".format(pre, name, strftime("%X", gmtime(start))))
    yield start
    end = current_time()
    function(
        "{}Finished {} at {} in {} seconds".format(
            pre, name, strftime("%X", gmtime(end)), str(end - start)
        )
    )
    if to_object:
        to_object.start = start
        to_object.end = end


def try_backup(path, remove=False):
    if not os.path.exists(path):
        return None
    outdir = os.path.dirname(path)
    if outdir and not os.path.exists(outdir):
        os.makedirs(outdir)
    creation = os.path.getctime(path)
    stamp = strftime("%Y-%m-%d_%H.%M.%S", localtime(creation))

    backup_dir = os.path.join(outdir, "backup")
    if not os.path.exists(backup_dir):
        os.makedirs(backup_dir)
    newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
    if remove:
        move(path, newpath)
    else:
        copyfile(path, newpath)
    return newpath


def safe_open(path, mode="r", backup=True, **kwargs):
    outdir = os.path.dirname(path)
    if outdir and not os.path.exists(outdir):
        os.makedirs(outdir)
    if backup and "w" in mode:
        try_backup(path)
    return open(path, mode=mode, **kwargs)


@contextmanager
def open_or_reuse(f, *args, **kwargs):
    try:
        with safe_open(f, *args, **kwargs) as f:
            yield f
    except (AttributeError, TypeError) as ex:
        yield f


def flatten_dict(d):
    if not isinstance(d, dict):
        return d
    return dict(_flatten_dict(d))


def _flatten_dict(d, prefix=""):
    if not isinstance(d, dict):
        # print('END:', prefix, d)
        yield prefix, d
        return
    if prefix:
        prefix = prefix + "."
    for k, v in d.items():
        # print(k, v)
        res = list(_flatten_dict(v, prefix="{}{}".format(prefix, k)))
        # print('RES:', res)
        yield from res

@ -77,7 +112,7 @@ def unflatten_dict(d):
        if not isinstance(k, str):
            target[k] = v
            continue
        tokens = k.split(".")
        if len(tokens) < 2:
            target[k] = v
            continue
@ -87,3 +122,39 @@ def unflatten_dict(d):
            target = target[token]
        target[tokens[-1]] = v
    return out


def run_and_return_exceptions(func, *args, **kwargs):
    """
    A wrapper for a function that catches exceptions and returns them.
    It is meant for async simulations.
    """
    try:
        return func(*args, **kwargs)
    except Exception as ex:
        if ex.__cause__ is not None:
            ex = ex.__cause__
        ex.message = "".join(
            traceback.format_exception(type(ex), ex, ex.__traceback__)[:]
        )
        return ex


def run_parallel(func, iterable, num_processes=1, **kwargs):
    if num_processes > 1 and not os.environ.get("SOIL_DEBUG", None):
        if num_processes < 1:
            num_processes = cpu_count() - num_processes
        p = Pool(processes=num_processes)
        wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
        for i in p.imap_unordered(wrapped_func, iterable):
            if isinstance(i, Exception):
                logger.error("Trial failed:\n\t%s", i.message)
                continue
            yield i
    else:
        for i in iterable:
            yield func(i, **kwargs)


def int_seed(seed: str):
    return int.from_bytes(seed.encode(), "little")
@ -4,7 +4,7 @@ import logging
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
ROOT = os.path.dirname(__file__)
|
ROOT = os.path.dirname(__file__)
|
||||||
DEFAULT_FILE = os.path.join(ROOT, 'VERSION')
|
DEFAULT_FILE = os.path.join(ROOT, "VERSION")
|
||||||
|
|
||||||
|
|
||||||
def read_version(versionfile=DEFAULT_FILE):
|
def read_version(versionfile=DEFAULT_FILE):
|
||||||
@ -12,9 +12,10 @@ def read_version(versionfile=DEFAULT_FILE):
|
|||||||
with open(versionfile) as f:
|
with open(versionfile) as f:
|
||||||
return f.read().strip()
|
return f.read().strip()
|
||||||
except IOError: # pragma: no cover
|
except IOError: # pragma: no cover
|
||||||
logger.error(('Running an unknown version of {}.'
|
logger.error(
|
||||||
'Be careful!.').format(__name__))
|
("Running an unknown version of {}." "Be careful!.").format(__name__)
|
||||||
return '0.0'
|
)
|
||||||
|
return "0.0"
|
||||||
|
|
||||||
|
|
||||||
__version__ = read_version()
|
__version__ = read_version()
|
||||||
|
@ -1,5 +0,0 @@
|
|||||||
from mesa.visualization.UserParam import UserSettableParameter
|
|
||||||
|
|
||||||
class UserSettableParameter(UserSettableParameter):
|
|
||||||
def __str__(self):
|
|
||||||
return self.value
|
|
@ -20,6 +20,7 @@ from tornado.concurrent import run_on_executor
|
|||||||
from concurrent.futures import ThreadPoolExecutor
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
|
|
||||||
from ..simulation import Simulation
|
from ..simulation import Simulation
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
logger.setLevel(logging.INFO)
|
logger.setLevel(logging.INFO)
|
||||||
|
|
||||||
@ -31,21 +32,24 @@ LOGGING_INTERVAL = 0.5
|
|||||||
# Workaround to let Soil load the required modules
|
# Workaround to let Soil load the required modules
|
||||||
sys.path.append(ROOT)
|
sys.path.append(ROOT)
|
||||||
|
|
||||||
|
|
||||||
class PageHandler(tornado.web.RequestHandler):
|
class PageHandler(tornado.web.RequestHandler):
|
||||||
"""Handler for the HTML template which holds the visualization."""
|
"""Handler for the HTML template which holds the visualization."""
|
||||||
|
|
||||||
def get(self):
|
def get(self):
|
||||||
self.render('index.html', port=self.application.port,
|
self.render(
|
||||||
name=self.application.name)
|
"index.html", port=self.application.port, name=self.application.name
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class SocketHandler(tornado.websocket.WebSocketHandler):
|
class SocketHandler(tornado.websocket.WebSocketHandler):
|
||||||
"""Handler for websocket."""
|
"""Handler for websocket."""
|
||||||
|
|
||||||
executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)
|
executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)
|
||||||
|
|
||||||
def open(self):
|
def open(self):
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.info('Socket opened!')
|
logger.info("Socket opened!")
|
||||||
|
|
||||||
def check_origin(self, origin):
|
def check_origin(self, origin):
|
||||||
return True
|
return True
|
||||||
@ -55,116 +59,156 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
|
|||||||
|
|
||||||
msg = tornado.escape.json_decode(message)
|
msg = tornado.escape.json_decode(message)
|
||||||
|
|
||||||
if msg['type'] == 'config_file':
|
if msg["type"] == "config_file":
|
||||||
|
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
print(msg['data'])
|
print(msg["data"])
|
||||||
|
|
||||||
self.config = list(yaml.load_all(msg['data']))
|
self.config = list(yaml.load_all(msg["data"]))
|
||||||
|
|
||||||
if len(self.config) > 1:
|
if len(self.config) > 1:
|
||||||
error = 'Please, provide only one configuration.'
|
error = "Please, provide only one configuration."
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.error(error)
|
logger.error(error)
|
||||||
self.write_message({'type': 'error',
|
self.write_message({"type": "error", "error": error})
|
||||||
'error': error})
|
|
||||||
return
|
return
|
||||||
|
|
||||||
self.config = self.config[0]
|
self.config = self.config[0]
|
||||||
self.send_log('INFO.' + self.simulation_name,
|
self.send_log(
|
||||||
'Using config: {name}'.format(name=self.config['name']))
|
"INFO." + self.simulation_name,
|
||||||
|
"Using config: {name}".format(name=self.config["name"]),
|
||||||
|
)
|
||||||
|
|
||||||
if 'visualization_params' in self.config:
|
if "visualization_params" in self.config:
|
||||||
self.write_message({'type': 'visualization_params',
|
self.write_message(
|
||||||
'data': self.config['visualization_params']})
|
{
|
||||||
self.name = self.config['name']
|
"type": "visualization_params",
|
||||||
|
"data": self.config["visualization_params"],
|
||||||
|
}
|
||||||
|
)
|
||||||
|
self.name = self.config["name"]
|
||||||
self.run_simulation()
|
self.run_simulation()
|
||||||
|
|
||||||
settings = []
|
settings = []
|
||||||
for key in self.config['environment_params']:
|
for key in self.config["environment_params"]:
|
||||||
if type(self.config['environment_params'][key]) == float or type(self.config['environment_params'][key]) == int:
|
if (
|
||||||
if self.config['environment_params'][key] <= 1:
|
type(self.config["environment_params"][key]) == float
|
||||||
setting_type = 'number'
|
or type(self.config["environment_params"][key]) == int
|
||||||
|
):
|
||||||
|
if self.config["environment_params"][key] <= 1:
|
||||||
|
setting_type = "number"
|
||||||
else:
|
else:
|
||||||
setting_type = 'great_number'
|
setting_type = "great_number"
|
||||||
elif type(self.config['environment_params'][key]) == bool:
|
elif type(self.config["environment_params"][key]) == bool:
|
||||||
setting_type = 'boolean'
|
setting_type = "boolean"
|
||||||
else:
|
else:
|
||||||
setting_type = 'undefined'
|
setting_type = "undefined"
|
||||||
|
|
||||||
settings.append({
|
settings.append(
|
||||||
'label': key,
|
{
|
||||||
'type': setting_type,
|
"label": key,
|
||||||
'value': self.config['environment_params'][key]
|
"type": setting_type,
|
||||||
})
|
"value": self.config["environment_params"][key],
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
self.write_message({'type': 'settings',
|
self.write_message({"type": "settings", "data": settings})
|
||||||
'data': settings})
|
|
||||||
|
|
||||||
elif msg['type'] == 'get_trial':
|
elif msg["type"] == "get_trial":
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.info('Trial {} requested!'.format(msg['data']))
|
logger.info("Trial {} requested!".format(msg["data"]))
|
||||||
self.send_log('INFO.' + __name__, 'Trial {} requested!'.format(msg['data']))
|
self.send_log("INFO." + __name__, "Trial {} requested!".format(msg["data"]))
|
||||||
self.write_message({'type': 'get_trial',
|
self.write_message(
|
||||||
'data': self.get_trial(int(msg['data']))})
|
{"type": "get_trial", "data": self.get_trial(int(msg["data"]))}
|
||||||
|
)
|
||||||
|
|
||||||
elif msg['type'] == 'run_simulation':
|
elif msg["type"] == "run_simulation":
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.info('Running new simulation for {name}'.format(name=self.config['name']))
|
logger.info(
|
||||||
self.send_log('INFO.' + self.simulation_name, 'Running new simulation for {name}'.format(name=self.config['name']))
|
"Running new simulation for {name}".format(name=self.config["name"])
|
||||||
self.config['environment_params'] = msg['data']
|
)
|
||||||
|
self.send_log(
|
||||||
|
"INFO." + self.simulation_name,
|
||||||
|
"Running new simulation for {name}".format(name=self.config["name"]),
|
||||||
|
)
|
||||||
|
self.config["environment_params"] = msg["data"]
|
||||||
self.run_simulation()
|
self.run_simulation()
|
||||||
|
|
||||||
elif msg['type'] == 'download_gexf':
|
elif msg["type"] == "download_gexf":
|
||||||
G = self.trials[ int(msg['data']) ].history_to_graph()
|
G = self.trials[int(msg["data"])].history_to_graph()
|
||||||
for node in G.nodes():
|
for node in G.nodes():
|
||||||
if 'pos' in G.nodes[node]:
|
if "pos" in G.nodes[node]:
|
||||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
G.nodes[node]["viz"] = {
|
||||||
del (G.nodes[node]['pos'])
|
"position": {
|
||||||
writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft')
|
"x": G.nodes[node]["pos"][0],
|
||||||
|
"y": G.nodes[node]["pos"][1],
|
||||||
|
"z": 0.0,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
del G.nodes[node]["pos"]
|
||||||
|
writer = nx.readwrite.gexf.GEXFWriter(version="1.2draft")
|
||||||
writer.add_graph(G)
|
writer.add_graph(G)
|
||||||
self.write_message({'type': 'download_gexf',
|
self.write_message(
|
||||||
'filename': self.config['name'] + '_trial_' + str(msg['data']),
|
{
|
||||||
'data': tostring(writer.xml).decode(writer.encoding) })
|
"type": "download_gexf",
|
||||||
|
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
|
||||||
|
"data": tostring(writer.xml).decode(writer.encoding),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
elif msg['type'] == 'download_json':
|
elif msg["type"] == "download_json":
|
||||||
G = self.trials[ int(msg['data']) ].history_to_graph()
|
G = self.trials[int(msg["data"])].history_to_graph()
|
||||||
for node in G.nodes():
|
for node in G.nodes():
|
||||||
if 'pos' in G.nodes[node]:
|
if "pos" in G.nodes[node]:
|
||||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
G.nodes[node]["viz"] = {
|
||||||
del (G.nodes[node]['pos'])
|
"position": {
|
||||||
self.write_message({'type': 'download_json',
|
"x": G.nodes[node]["pos"][0],
|
||||||
'filename': self.config['name'] + '_trial_' + str(msg['data']),
|
"y": G.nodes[node]["pos"][1],
|
||||||
'data': nx.node_link_data(G) })
|
"z": 0.0,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
del G.nodes[node]["pos"]
|
||||||
|
self.write_message(
|
||||||
|
{
|
||||||
|
"type": "download_json",
|
||||||
|
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
|
||||||
|
"data": nx.node_link_data(G),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
else:
|
else:
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.info('Unexpected message!')
|
logger.info("Unexpected message!")
|
||||||
|
|
||||||
def update_logging(self):
|
def update_logging(self):
|
||||||
try:
|
try:
|
||||||
if (not self.log_capture_string.closed and self.log_capture_string.getvalue()):
|
if (
|
||||||
for i in range(len(self.log_capture_string.getvalue().split('\n')) - 1):
|
not self.log_capture_string.closed
|
||||||
self.send_log('INFO.' + self.simulation_name, self.log_capture_string.getvalue().split('\n')[i])
|
and self.log_capture_string.getvalue()
|
||||||
|
):
|
||||||
|
for i in range(len(self.log_capture_string.getvalue().split("\n")) - 1):
|
||||||
|
self.send_log(
|
||||||
|
"INFO." + self.simulation_name,
|
||||||
|
self.log_capture_string.getvalue().split("\n")[i],
|
||||||
|
)
|
||||||
self.log_capture_string.truncate(0)
|
self.log_capture_string.truncate(0)
|
||||||
self.log_capture_string.seek(0)
|
self.log_capture_string.seek(0)
|
||||||
finally:
|
finally:
|
||||||
if self.capture_logging:
|
if self.capture_logging:
|
||||||
tornado.ioloop.IOLoop.current().call_later(LOGGING_INTERVAL, self.update_logging)
|
tornado.ioloop.IOLoop.current().call_later(
|
||||||
|
LOGGING_INTERVAL, self.update_logging
|
||||||
|
)
|
||||||
|
|
||||||
def on_close(self):
|
def on_close(self):
|
||||||
if self.application.verbose:
|
if self.application.verbose:
|
||||||
logger.info('Socket closed!')
|
logger.info("Socket closed!")
|
||||||
|
|
||||||
def send_log(self, logger, logging):
|
def send_log(self, logger, logging):
|
||||||
self.write_message({'type': 'log',
|
self.write_message({"type": "log", "logger": logger, "logging": logging})
|
||||||
'logger': logger,
|
|
||||||
'logging': logging})
|
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def simulation_name(self):
|
def simulation_name(self):
|
||||||
return self.config.get('name', 'NoSimulationRunning')
|
return self.config.get("name", "NoSimulationRunning")
|
||||||
|
|
||||||
@run_on_executor
|
@run_on_executor
|
||||||
def nonblocking(self, config):
|
def nonblocking(self, config):
|
||||||
@ -174,28 +218,31 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
|
|||||||
@tornado.gen.coroutine
|
@tornado.gen.coroutine
|
||||||
def run_simulation(self):
|
def run_simulation(self):
|
||||||
# Run simulation and capture logs
|
# Run simulation and capture logs
|
||||||
logger.info('Running simulation!')
|
logger.info("Running simulation!")
|
||||||
if 'visualization_params' in self.config:
|
if "visualization_params" in self.config:
|
||||||
del self.config['visualization_params']
|
del self.config["visualization_params"]
|
||||||
with self.logging(self.simulation_name):
|
with self.logging(self.simulation_name):
|
||||||
try:
|
try:
|
||||||
config = dict(**self.config)
|
config = dict(**self.config)
|
||||||
config['outdir'] = os.path.join(self.application.outdir, config['name'])
|
config["outdir"] = os.path.join(self.application.outdir, config["name"])
|
||||||
config['dump'] = self.application.dump
|
config["dump"] = self.application.dump
|
||||||
self.trials = yield self.nonblocking(config)
|
self.trials = yield self.nonblocking(config)
|
||||||
|
|
||||||
self.write_message({'type': 'trials',
|
self.write_message(
|
||||||
'data': list(trial.name for trial in self.trials) })
|
{
|
||||||
|
"type": "trials",
|
||||||
|
"data": list(trial.name for trial in self.trials),
|
||||||
|
}
|
||||||
|
)
|
||||||
except Exception as ex:
|
except Exception as ex:
|
||||||
error = 'Something went wrong:\n\t{}'.format(ex)
|
error = "Something went wrong:\n\t{}".format(ex)
|
||||||
logging.info(error)
|
logging.info(error)
|
||||||
self.write_message({'type': 'error',
|
self.write_message({"type": "error", "error": error})
|
||||||
'error': error})
|
self.send_log("ERROR." + self.simulation_name, error)
|
||||||
self.send_log('ERROR.' + self.simulation_name, error)
|
|
||||||
|
|
||||||
def get_trial(self, trial):
|
def get_trial(self, trial):
|
||||||
logger.info('Available trials: %s ' % len(self.trials))
|
logger.info("Available trials: %s " % len(self.trials))
|
||||||
logger.info('Ask for : %s' % trial)
|
logger.info("Ask for : %s" % trial)
|
||||||
trial = self.trials[trial]
|
trial = self.trials[trial]
|
||||||
G = trial.history_to_graph()
|
G = trial.history_to_graph()
|
||||||
return nx.node_link_data(G)
|
return nx.node_link_data(G)
|
||||||
@ -221,18 +268,21 @@ class ModularServer(tornado.web.Application):
|
|||||||
"""Main visualization application."""
|
"""Main visualization application."""
|
||||||
|
|
||||||
port = 8001
|
port = 8001
|
||||||
page_handler = (r'/', PageHandler)
|
page_handler = (r"/", PageHandler)
|
||||||
socket_handler = (r'/ws', SocketHandler)
|
socket_handler = (r"/ws", SocketHandler)
|
||||||
static_handler = (r'/(.*)', tornado.web.StaticFileHandler,
|
static_handler = (
|
||||||
{'path': os.path.join(ROOT, 'static')})
|
r"/(.*)",
|
||||||
local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,
|
tornado.web.StaticFileHandler,
|
||||||
{'path': ''})
|
{"path": os.path.join(ROOT, "static")},
|
||||||
|
)
|
||||||
|
local_handler = (r"/local/(.*)", tornado.web.StaticFileHandler, {"path": ""})
|
||||||
|
|
||||||
handlers = [page_handler, socket_handler, static_handler, local_handler]
|
handlers = [page_handler, socket_handler, static_handler, local_handler]
|
||||||
settings = {'debug': True,
|
settings = {"debug": True, "template_path": ROOT + "/templates"}
|
||||||
'template_path': ROOT + '/templates'}
|
|
||||||
|
|
||||||
def __init__(self, dump=False, outdir='output', name='SOIL', verbose=True, *args, **kwargs):
|
def __init__(
|
||||||
|
self, dump=False, outdir="output", name="SOIL", verbose=True, *args, **kwargs
|
||||||
|
):
|
||||||
|
|
||||||
self.verbose = verbose
|
self.verbose = verbose
|
||||||
self.name = name
|
self.name = name
|
||||||
@ -247,8 +297,8 @@ class ModularServer(tornado.web.Application):
|
|||||||
|
|
||||||
if port is not None:
|
if port is not None:
|
||||||
self.port = port
|
self.port = port
|
||||||
url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
|
url = "http://127.0.0.1:{PORT}".format(PORT=self.port)
|
||||||
print('Interface starting at {url}'.format(url=url))
|
print("Interface starting at {url}".format(url=url))
|
||||||
self.listen(self.port)
|
self.listen(self.port)
|
||||||
# webbrowser.open(url)
|
# webbrowser.open(url)
|
||||||
tornado.ioloop.IOLoop.instance().start()
|
tornado.ioloop.IOLoop.instance().start()
|
||||||
@ -263,12 +313,22 @@ def run(*args, **kwargs):
|
|||||||
def main():
|
def main():
|
||||||
import argparse
|
import argparse
|
||||||
|
|
||||||
parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
|
parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
|
||||||
|
|
||||||
parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
|
parser.add_argument(
|
||||||
parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
|
"--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
|
||||||
parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
|
)
|
||||||
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
|
parser.add_argument(
|
||||||
|
"--dump", "-d", help="dumping results in folder output", action="store_true"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
|
||||||
|
)
|
||||||
|
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
|
||||||
run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
|
run(
|
||||||
|
name=args.name,
|
||||||
|
port=(args.port[0] if isinstance(args.port, list) else args.port),
|
||||||
|
verbose=args.verbose,
|
||||||
|
)
|
||||||
|
@ -6,11 +6,11 @@ network_params:
|
|||||||
n: 100
|
n: 100
|
||||||
m: 2
|
m: 2
|
||||||
network_agents:
|
network_agents:
|
||||||
- agent_type: ControlModelM2
|
- agent_class: ControlModelM2
|
||||||
weight: 0.1
|
weight: 0.1
|
||||||
state:
|
state:
|
||||||
id: 1
|
id: 1
|
||||||
- agent_type: ControlModelM2
|
- agent_class: ControlModelM2
|
||||||
weight: 0.9
|
weight: 0.9
|
||||||
state:
|
state:
|
||||||
id: 0
|
id: 0
|
||||||
|
@ -4,20 +4,33 @@ from simulator import Simulator
|
|||||||
|
|
||||||
|
|
||||||
def run(simulator, name="SOIL", port=8001, verbose=False):
|
def run(simulator, name="SOIL", port=8001, verbose=False):
|
||||||
server = ModularServer(simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose)
|
server = ModularServer(
|
||||||
|
simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose
|
||||||
|
)
|
||||||
server.port = port
|
server.port = port
|
||||||
server.launch()
|
server.launch()
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
|
|
||||||
parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
|
parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
|
||||||
|
|
||||||
parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
|
parser.add_argument(
|
||||||
parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
|
"--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
|
||||||
parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
|
)
|
||||||
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
|
parser.add_argument(
|
||||||
|
"--dump", "-d", help="dumping results in folder output", action="store_true"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
|
||||||
|
)
|
||||||
|
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
|
||||||
soil = Simulator(dump=args.dump)
|
soil = Simulator(dump=args.dump)
|
||||||
run(soil, name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
|
run(
|
||||||
|
soil,
|
||||||
|
name=args.name,
|
||||||
|
port=(args.port[0] if isinstance(args.port, list) else args.port),
|
||||||
|
verbose=args.verbose,
|
||||||
|
)
|
||||||
|
Before Width: | Height: | Size: 1.1 MiB |
Before Width: | Height: | Size: 8.0 MiB |