1
0
mirror of https://github.com/gsi-upm/soil synced 2025-10-19 01:38:31 +00:00

Compare commits

...

35 Commits

Author SHA1 Message Date
J. Fernando Sánchez
d3cee18635 Add seed to cars example 2022-10-20 14:47:28 +02:00
J. Fernando Sánchez
9a7b62e88e Release 0.30.0rc3 2022-10-20 14:12:34 +02:00
J. Fernando Sánchez
c09e480d37 black formatting 2022-10-20 14:12:10 +02:00
J. Fernando Sánchez
b2d48cb4df Add test cases for 'ASK' 2022-10-20 14:10:34 +02:00
J. Fernando Sánchez
a1262edd2a Refactored time
Treating time and conditions as the same entity was getting confusing, and it
added a lot of unnecessary abstraction in a critical part (the scheduler).

The scheduling queue now has the time as a floating number (faster), the agent
id (for ties) and the condition, as well as the agent. The first three
elements (time, id, condition) can be considered as the "key" for the event.

To allow for agent execution to be "randomized" within every step, a new
parameter has been added to the scheduler, which makes it add a random number to
the key in order to change the ordering.

`EventedAgent.received` now checks the messages before returning control to the
user by default.
2022-10-20 12:15:25 +02:00
J. Fernando Sánchez
cbbaf73538 Fix bug EventedEnvironment 2022-10-20 12:07:56 +02:00
J. Fernando Sánchez
2f5e5d0a74 Black formatting 2022-10-18 17:03:40 +02:00
J. Fernando Sánchez
a2fb25c160 Version 0.30.0rc2
* Fix CLI arguments not being used when easy is passed a simulation instance
* Docs for `examples/events_and_messages/cars.py`
2022-10-18 17:02:12 +02:00
J. Fernando Sánchez
5fcf610108 Version 0.30.0rc1 2022-10-18 15:02:05 +02:00
J. Fernando Sánchez
159c9a9077 Add events 2022-10-18 13:11:01 +02:00
J. Fernando Sánchez
3776c4e5c5 Refactor
* Removed references to `set_state`
* Split some functionality from `agents` into separate files (`fsm` and
`network_agents`)
* Rename `neighboring_agents` to `neighbors`
* Delete some spurious functions
2022-10-17 21:49:31 +02:00
J. Fernando Sánchez
880a9f2a1c black formatting 2022-10-17 20:23:57 +02:00
J. Fernando Sánchez
227fdf050e Fix conditionals 2022-10-17 19:29:39 +02:00
J. Fernando Sánchez
5d759d0072 Add conditional time values 2022-10-17 13:58:14 +02:00
J. Fernando Sánchez
77d08fc592 Agent step can be a generator 2022-10-17 08:58:51 +02:00
J. Fernando Sánchez
0efcd24d90 Improve exporters 2022-10-16 21:57:30 +02:00
J. Fernando Sánchez
78833a9e08 Formatted with black 2022-10-16 17:58:19 +02:00
J. Fernando Sánchez
d9947c2c52 WIP: all tests pass
Documentation needs some improvement

The API has been simplified to only allow for ONE topology per
NetworkEnvironment.
This covers the main use case, and simplifies the code.
2022-10-16 17:56:23 +02:00
J. Fernando Sánchez
cd62c23cb9 WIP: all tests pass 2022-10-13 22:43:16 +02:00
J. Fernando Sánchez
f811ee18c5 WIP 2022-10-06 15:49:19 +02:00
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
J. Fernando Sánchez
50cba751a6 Release 0.20.6 2022-07-05 12:08:34 +02:00
J. Fernando Sánchez
dfb6d13649 version 0.20.5 2022-05-18 16:13:53 +02:00
J. Fernando Sánchez
5559d37e57 version 0.20.4 2022-05-18 15:20:58 +02:00
J. Fernando Sánchez
2116fe6f38 Bug fixes and minor improvements 2022-05-12 16:14:47 +02:00
J. Fernando Sánchez
affeeb9643 Update examples 2022-04-04 16:47:58 +02:00
J. Fernando Sánchez
42ddc02318 CI: delay PyPI check 2022-03-07 14:35:07 +01:00
J. Fernando Sánchez
cab9a3440b Fix typo CI/CD 2022-03-07 13:57:25 +01:00
J. Fernando Sánchez
db505da49c Minor CI change 2022-03-07 13:35:02 +01:00
90 changed files with 5394 additions and 3253 deletions

View File

@@ -37,7 +37,7 @@ push_pypi:
- echo $CI_COMMIT_TAG > soil/VERSION - echo $CI_COMMIT_TAG > soil/VERSION
- pip install twine - pip install twine
- python setup.py sdist bdist_wheel - python setup.py sdist bdist_wheel
- TWINE_PASSWORD=${PYPI_PASSWORD} TWINE_USERNAME={PYPI_USERNAME} python -m twine upload --ignore-existing dist/* - TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*
check_pypi: check_pypi:
only: only:
@@ -48,3 +48,6 @@ check_pypi:
stage: check_published stage: check_published
script: script:
- pip install soil==$CI_COMMIT_TAG - pip install soil==$CI_COMMIT_TAG
# Allow PYPI to update its index before we try to install
when: delayed
start_in: 2 minutes

View File

@@ -3,6 +3,54 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.30 UNRELEASED]
### Added
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similar to normal states, with one caveat. Only `time` values can be yielded, not a state. This is because the state will not change, it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Ability
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` does not re-export built-in time (change in `soil.simulation`. It used to create subtle import conflicts when importing soil.time.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent __init__, not in the environment. This decouples both classes a bit more, and it is more intuitive
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
## [0.20.2] ## [0.20.2]
### Fixed ### Fixed
* CI/CD testing issues * CI/CD testing issues

View File

@@ -5,6 +5,42 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models. Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
# Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module.
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation ## Citation
@@ -31,24 +67,6 @@ If you use Soil in your research, don't forget to cite this paper:
``` ```
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
As of this writing,
This is a non-exhaustive list of tasks to achieve compatibility:
* Environments.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. Might rename environment_agents to other_agents or sth like that
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Document the new APIs and usage
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021 @Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es) [![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

View File

@@ -13,7 +13,7 @@ Here's an example (``example.yml``).
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``). This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil. The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state. 10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``. All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``.
The state of the agents will be updated every 2 seconds (``interval``). The state of the agents will be updated every 2 seconds (``interval``).
@@ -88,9 +88,18 @@ For example, the following configuration is equivalent to :code:`nx.complete_gra
Environment Environment
============ ============
The environment is the place where the shared state of the simulation is stored. The environment is the place where the shared state of the simulation is stored.
For instance, the probability of disease outbreak. That means both global parameters, such as the probability of disease outbreak.
The configuration file may specify the initial value of the environment parameters: But it also means other data, such as a map, or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this on the following section).
Soil environments are very similar, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml .. code:: yaml
@@ -98,23 +107,33 @@ The configuration file may specify the initial value of the environment paramete
daily_probability_of_earthquake: 0.001 daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0 number_of_earthquakes: 0
All agents have access to the environment parameters. All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state. In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent. For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
Agents Agents
====== ======
Agents are a way of modelling behavior. Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_type``) and state. Agents can be characterized with two variables: agent type (``agent_class``) and state.
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters. The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally used the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents. Through the environment, it can access the network topology and the state of other agents.
There are three three types of agents according to how they are added to the simulation: network agents and environment agent. There are two types of agents according to how they are added to the simulation: network agents and environment agent.
Network Agents Network Agents
############## ##############
Network agents are attached to a node in the topology. Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes. The configuration file allows you to specify how agents will be mapped to topology nodes.
@@ -123,17 +142,19 @@ Hence, every node in the network will be associated to an agent of that type.
.. code:: yaml .. code:: yaml
agent_type: SISaModel agent_class: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property). It is also possible to add more than one type of agent to the simulation.
To control the ratio of each type (using the ``weight`` property).
For instance, with following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type. For instance, with following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml .. code:: yaml
network_agents: network_agents:
- agent_type: SISaModel - agent_class: SISaModel
weight: 1 weight: 1
- agent_type: CounterModel - agent_class: CounterModel
weight: 5 weight: 5
The third option is to specify the type of agent on the node itself, e.g.: The third option is to specify the type of agent on the node itself, e.g.:
@@ -144,10 +165,10 @@ The third option is to specify the type of agent on the node itself, e.g.:
topology: topology:
nodes: nodes:
- id: first - id: first
agent_type: BaseAgent agent_class: BaseAgent
states: states:
first: first:
agent_type: SISaModel agent_class: SISaModel
This would also work with a randomly generated network: This would also work with a randomly generated network:
@@ -158,9 +179,9 @@ This would also work with a randomly generated network:
network: network:
generator: complete generator: complete
n: 5 n: 5
agent_type: BaseAgent agent_class: BaseAgent
states: states:
- agent_type: SISaModel - agent_class: SISaModel
@@ -171,11 +192,11 @@ e.g., to populate the network with SISaModel, roughly 10% of them with a discont
.. code:: yaml .. code:: yaml
network_agents: network_agents:
- agent_type: SISaModel - agent_class: SISaModel
weight: 9 weight: 9
state: state:
id: neutral id: neutral
- agent_type: SISaModel - agent_class: SISaModel
weight: 1 weight: 1
state: state:
id: discontent id: discontent
@@ -185,7 +206,7 @@ For instance, to add a state for the two nodes in this configuration:
.. code:: yaml .. code:: yaml
agent_type: SISaModel agent_class: SISaModel
network: network:
generator: complete_graph generator: complete_graph
n: 2 n: 2
@@ -210,10 +231,10 @@ These agents are programmed in much the same way as network agents, the only dif
.. code:: .. code::
environment_agents: environment_agents:
- agent_type: MyAgent - agent_class: MyAgent
state: state:
mood: happy mood: happy
- agent_type: DummyAgent - agent_class: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance. You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.

View File

@@ -8,15 +8,15 @@ network_params:
n: 100 n: 100
m: 2 m: 2
network_agents: network_agents:
- agent_type: SISaModel - agent_class: SISaModel
weight: 1 weight: 1
state: state:
id: content id: content
- agent_type: SISaModel - agent_class: SISaModel
weight: 1 weight: 1
state: state:
id: discontent id: discontent
- agent_type: SISaModel - agent_class: SISaModel
weight: 8 weight: 8
state: state:
id: neutral id: neutral

View File

@@ -3,11 +3,11 @@ name: quickstart
num_trials: 1 num_trials: 1
max_time: 1000 max_time: 1000
network_agents: network_agents:
- agent_type: SISaModel - agent_class: SISaModel
state: state:
id: neutral id: neutral
weight: 1 weight: 1
- agent_type: SISaModel - agent_class: SISaModel
state: state:
id: content id: content
weight: 2 weight: 2

View File

@@ -1 +1 @@
ipython==7.23 ipython>=7.31.1

12
docs/soil-vs.rst Normal file
View File

@@ -0,0 +1,12 @@
### MESA
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
Here are some reasons to use Soil instead of plain mesa:
- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed)
- Reporting functions that aggregate multiple

View File

@@ -211,11 +211,11 @@ nodes in that network. Notice how node 0 is the only one with a TV.
sim = soil.Simulation(topology=G, sim = soil.Simulation(topology=G,
num_trials=1, num_trials=1,
max_time=MAX_TIME, max_time=MAX_TIME,
environment_agents=[{'agent_type': NewsEnvironmentAgent, environment_agents=[{'agent_class': NewsEnvironmentAgent,
'state': { 'state': {
'event_time': EVENT_TIME 'event_time': EVENT_TIME
}}], }}],
network_agents=[{'agent_type': NewsSpread, network_agents=[{'agent_class': NewsSpread,
'weight': 1}], 'weight': 1}],
states={0: {'has_tv': True}}, states={0: {'has_tv': True}},
default_state={'has_tv': False}, default_state={'has_tv': False},
@@ -285,14 +285,14 @@ For this demo, we will use a python dictionary:
}, },
'network_agents': [ 'network_agents': [
{ {
'agent_type': NewsSpread, 'agent_class': NewsSpread,
'weight': 1, 'weight': 1,
'state': { 'state': {
'has_tv': False 'has_tv': False
} }
}, },
{ {
'agent_type': NewsSpread, 'agent_class': NewsSpread,
'weight': 2, 'weight': 2,
'state': { 'state': {
'has_tv': True 'has_tv': True
@@ -300,7 +300,7 @@ For this demo, we will use a python dictionary:
} }
], ],
'environment_agents':[ 'environment_agents':[
{'agent_type': NewsEnvironmentAgent, {'agent_class': NewsEnvironmentAgent,
'state': { 'state': {
'event_time': 10 'event_time': 10
} }

View File

@@ -98,11 +98,11 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_dumb\r\n", "name: Sim_all_dumb\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -122,19 +122,19 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_half_herd\r\n", "name: Sim_half_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -154,12 +154,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_herd\r\n", "name: Sim_all_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
@@ -181,12 +181,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_wise_herd\r\n", "name: Sim_wise_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -207,12 +207,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_wise\r\n", "name: Sim_all_wise\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",

View File

@@ -141,10 +141,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -1758,10 +1758,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -3363,10 +3363,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -4977,10 +4977,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -6591,10 +6591,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -8211,10 +8211,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -9828,10 +9828,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -11448,10 +11448,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -13062,10 +13062,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -14679,10 +14679,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -16296,10 +16296,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -17916,10 +17916,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -19521,10 +19521,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -21144,10 +21144,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -22767,10 +22767,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -24375,10 +24375,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -25992,10 +25992,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -27603,10 +27603,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -29220,10 +29220,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -30819,10 +30819,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -32439,10 +32439,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -34056,10 +34056,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -35676,10 +35676,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -37293,10 +37293,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -38913,10 +38913,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -40518,10 +40518,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -42129,10 +42129,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -43746,10 +43746,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -45357,10 +45357,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -46974,10 +46974,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -48588,10 +48588,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -50202,10 +50202,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -51819,10 +51819,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
(the same 'agent_type' → 'agent_class' change repeats throughout the remaining output hunks of this notebook)

View File

@@ -1,27 +1,54 @@
--- ---
version: '2'
name: simple name: simple
group: tests group: tests
dir_path: "/tmp/" dir_path: "/tmp/"
num_trials: 3 num_trials: 3
max_time: 100 max_steps: 100
interval: 1 interval: 1
seed: "CompleteSeed!" seed: "CompleteSeed!"
network_params: model_class: Environment
generator: complete_graph model_params:
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
state:
state_id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_class: Environment
environment_params:
am_i_complete: true am_i_complete: true
default_state: topology:
incidents: 0 params:
states: generator: complete_graph
- name: 'The first node' n: 12
- name: 'The second node' environment:
agents:
agent_class: CounterModel
topology: true
state:
times: 1
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: false
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0
group: fixed_counters
state:
times: 1
total: 0
- agent_class: CounterModel
group: fixed_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: distro_counters
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
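To run a definition like the one above from Python rather than from the command line, the same helper used at the bottom of the pubcrawl example further down in this comparison should apply; a minimal sketch, assuming the file is saved as `simple.yml`:

```python
from soil import simulation

# Same call pattern as in the pubcrawl example below; "simple.yml" is whatever
# name this definition is saved under.
simulation.run_from_config("simple.yml", dry_run=True, parallel=False)
```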

View File

@@ -2,7 +2,7 @@
name: custom-generator name: custom-generator
description: Using a custom generator for the network description: Using a custom generator for the network
num_trials: 3 num_trials: 3
max_time: 100 max_steps: 100
interval: 1 interval: 1
network_params: network_params:
generator: mymodule.mygenerator generator: mymodule.mygenerator
@@ -10,7 +10,7 @@ network_params:
n: 10 n: 10
n_edges: 5 n_edges: 5
network_agents: network_agents:
- agent_type: CounterModel - agent_class: CounterModel
weight: 1 weight: 1
state: state:
state_id: 0 state_id: 0

View File

@@ -1,27 +1,22 @@
from networkx import Graph from networkx import Graph
import random
import networkx as nx import networkx as nx
from random import choice
def mygenerator(n=5, n_edges=5): def mygenerator(n=5, n_edges=5):
''' """
Just a simple generator that creates a network with n nodes and Just a simple generator that creates a network with n nodes and
n_edges edges. Edges are assigned randomly, only avoiding self loops. n_edges edges. Edges are assigned randomly, only avoiding self loops.
''' """
G = nx.Graph() G = nx.Graph()
for i in range(n): for i in range(n):
G.add_node(i) G.add_node(i)
for i in range(n_edges): for i in range(n_edges):
nodes = list(G.nodes) nodes = list(G.nodes)
n_in = choice(nodes) n_in = random.choice(nodes)
nodes.remove(n_in) # Avoid loops nodes.remove(n_in) # Avoid loops
n_out = choice(nodes) n_out = random.choice(nodes)
G.add_edge(n_in, n_out) G.add_edge(n_in, n_out)
return G return G
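Since the generator above is plain networkx, it can be sanity-checked outside of soil; a quick sketch, assuming it lives in `mymodule` as referenced by the configuration above:

```python
from mymodule import mygenerator  # the module shown above

G = mygenerator(n=10, n_edges=5)
# A networkx Graph collapses duplicate edges, so this prints 10 nodes and at most 5 edges
print(G.number_of_nodes(), G.number_of_edges())
```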

View File

@@ -2,34 +2,37 @@ from soil.agents import FSM, state, default_state
class Fibonacci(FSM): class Fibonacci(FSM):
'''Agent that only executes in t_steps that are Fibonacci numbers''' """Agent that only executes in t_steps that are Fibonacci numbers"""
defaults = { defaults = {"prev": 1}
'prev': 1
}
@default_state @default_state
@state @state
def counting(self): def counting(self):
self.log('Stopping at {}'.format(self.now)) self.log("Stopping at {}".format(self.now))
prev, self['prev'] = self['prev'], max([self.now, self['prev']]) prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev) return None, self.env.timeout(prev)
class Odds(FSM): class Odds(FSM):
'''Agent that only executes in odd t_steps''' """Agent that only executes in odd t_steps"""
@default_state @default_state
@state @state
def odds(self): def odds(self):
self.log('Stopping at {}'.format(self.now)) self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1+self.now%2) return None, self.env.timeout(1 + self.now % 2)
if __name__ == '__main__':
import logging if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
from soil import Simulation from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci},
{'ids': [1], 'agent_type': Odds}], s = Simulation(
network_params={"generator": "complete_graph", "n": 2}, network_agents=[
max_time=100, {"ids": [0], "agent_class": Fibonacci},
) {"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True) s.run(dry_run=True)
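The cars example later in this comparison writes its states as generators that `yield Delta(...)`; the same style can express this agent's variable delays. A minimal sketch, assuming `Delta` is importable from `soil` as in that example; the class name is illustrative:

```python
from soil import agents, Delta  # Delta is used the same way in the cars example below


class FibonacciSleeper(agents.FSM):
    """Illustrative generator-state variant of the Fibonacci agent above."""

    prev = 1

    @agents.default_state
    @agents.state
    def counting(self):
        self.log(f"Waking up at {self.now}")
        prev, self.prev = self.prev, max(self.now, self.prev)
        yield Delta(prev)  # sleep for `prev` time units
        return self.counting  # then schedule this same state again
```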

View File

@@ -0,0 +1,7 @@
This example can be run with command-line options, like this:
```bash
python cars.py --level DEBUG -e summary --csv
```
This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.
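The same run can also be started from Python; a minimal sketch, assuming the `simulation` object defined in `cars.py` (shown below) and soil's `easy` helper, with exporter selection left to the command line or to defaults:

```python
# Rough programmatic equivalent of the command above (the --csv / -e flags are
# omitted here; selecting exporters in code may differ between soil versions).
import logging

from soil import easy
from cars import simulation  # the Simulation instance defined in cars.py

logging.basicConfig(level=logging.DEBUG)  # mirrors --level DEBUG

if __name__ == "__main__":
    with easy(simulation) as sim:
        sim.run()
```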

View File

@@ -0,0 +1,243 @@
"""
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
from their location to their desired location.
An example scenario could play out like the following:
- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
- Passenger(1) tells every driver that it wants to request a Journey.
- Each driver receives the request.
If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
- When Passenger(1) accepts the request, two things happen:
- Passenger(1) changes its state to `driving_home`
- Driver(2) starts moving towards the origin of the Journey
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
- When Driver(2) reaches the destination (carrying Passenger(1) along):
- Driver(2) starts wandering again
- Passenger(1) dies, and is removed from the simulation
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from soil import *
from soil import events
from mesa.space import MultiGrid
# More complex scenarios may use more than one type of message between objects.
# A common pattern is to use `enum.Enum` to represent state changes in a request.
@dataclass
class Journey:
"""
This represents a request for a journey. Passengers and drivers exchange this object.
A journey may have a driver assigned or not. If the driver has not been assigned, this
object is considered a "request for a journey".
"""
origin: (int, int)
destination: (int, int)
tip: float
passenger: Passenger
driver: Driver = None
class City(EventedEnvironment):
"""
An environment with a grid where drivers and passengers will be placed.
The number of drivers and riders is configurable through its parameters:
:param int n_cars: The total number of drivers to add
:param int n_passengers: The number of passengers in the simulation
:param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
and `n_cars` params.
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
def __init__(
self,
*args,
n_cars=1,
n_passengers=10,
height=100,
width=100,
agents=None,
model_reporters=None,
**kwargs,
):
self.grid = MultiGrid(width=width, height=height, torus=False)
if agents is None:
agents = []
for i in range(n_cars):
agents.append({"agent_class": Driver})
for i in range(n_passengers):
agents.append({"agent_class": Passenger})
model_reporters = model_reporters or {
"earnings": "total_earnings",
"n_passengers": "number_passengers",
}
print("REPORTERS", model_reporters)
super().__init__(
*args, agents=agents, model_reporters=model_reporters, **kwargs
)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
@property
def total_earnings(self):
return sum(d.earnings for d in self.agents(agent_class=Driver))
@property
def number_passengers(self):
return self.count_agents(agent_class=Passenger)
class Driver(Evented, FSM):
pos = None
journey = None
earnings = 0
def on_receive(self, msg, sender):
"""This is not a state. It will run (and block) every time check_messages is invoked"""
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
msg.driver = self
self.journey = msg
def check_passengers(self):
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.info(f"Passengers left {c}")
if not c:
self.die()
@default_state
@state
def wandering(self):
"""Move around the city until a journey is accepted"""
target = None
self.check_passengers()
self.journey = None
while self.journey is None: # No potential journeys detected (see on_receive)
if target is None or not self.move_towards(target):
target = self.random.choice(
self.model.grid.get_neighborhood(self.pos, moore=False)
)
self.check_passengers()
# This will call on_receive behind the scenes, and the agent's status will be updated
self.check_messages()
yield Delta(30) # Wait at least 30 seconds before checking again
try:
# Re-send the journey to the passenger, to confirm that we have been selected
self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
except events.TimedOut:
# No journey has been accepted. Try again
self.journey = None
return
return self.driving
@state
def driving(self):
"""The journey has been accepted. Pick them up and take them to their destination"""
while self.move_towards(self.journey.origin):
yield
while self.move_towards(self.journey.destination, with_passenger=True):
yield
self.earnings += self.journey.tip
self.check_passengers()
return self.wandering
def move_towards(self, target, with_passenger=False):
"""Move one cell at a time towards a target"""
self.info(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
next_pos = [self.pos[0], self.pos[1]]
for idx in [0, 1]:
if self.pos[idx] < target[idx]:
next_pos[idx] += 1
break
if self.pos[idx] > target[idx]:
next_pos[idx] -= 1
break
self.model.grid.move_agent(self, tuple(next_pos))
if with_passenger:
self.journey.passenger.pos = (
self.pos
) # This could be communicated through messages
return True
class Passenger(Evented, FSM):
pos = None
def on_receive(self, msg, sender):
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
if isinstance(msg, Journey):
self.journey = msg
return msg
@default_state
@state
def asking(self):
destination = (
self.random.randint(0, self.model.grid.height),
self.random.randint(0, self.model.grid.width),
)
self.journey = None
journey = Journey(
origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self,
)
timeout = 60
expiration = self.now + timeout
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey:
self.info(f"Passenger at: { self.pos }. Checking for responses.")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Passenger at: { self.pos }. Asking for journey.")
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
return self.driving_home
@state
def driving_home(self):
while (
self.pos[0] != self.journey.destination[0]
or self.pos[1] != self.journey.destination[1]
):
try:
yield self.received(timeout=60)
except events.TimedOut:
pass
self.info("Got home safe!")
self.die()
simulation = Simulation(
name="RideHailing",
model_class=City,
model_params={"n_passengers": 2},
seed="carsSeed",
)
if __name__ == "__main__":
with easy(simulation) as s:
s.run()
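The comment above the `Journey` dataclass mentions that more complex scenarios often use `enum.Enum` to represent state changes in a request. A minimal, soil-independent sketch of that pattern; `JourneyStatus` and `JourneyUpdate` are illustrative names, not part of the example above:

```python
from dataclasses import dataclass
from enum import Enum, auto


class JourneyStatus(Enum):
    # Hypothetical lifecycle of a journey request
    REQUESTED = auto()
    ACCEPTED = auto()
    PICKED_UP = auto()
    COMPLETED = auto()


@dataclass
class JourneyUpdate:
    journey_id: int
    status: JourneyStatus = JourneyStatus.REQUESTED


# A driver could broadcast a small JourneyUpdate on every change instead of
# re-sending the whole Journey object.
print(JourneyUpdate(journey_id=1, status=JourneyStatus.ACCEPTED))
```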

View File

@@ -3,19 +3,17 @@ name: mesa_sim
group: tests group: tests
dir_path: "/tmp" dir_path: "/tmp"
num_trials: 3 num_trials: 3
max_time: 100 max_steps: 100
interval: 1 interval: 1
seed: '1' seed: '1'
network_params: model_class: social_wealth.MoneyEnv
model_params:
generator: social_wealth.graph_generator generator: social_wealth.graph_generator
n: 5 agents:
network_agents: topology: true
- agent_type: social_wealth.SocialMoneyAgent distribution:
weight: 1 - agent_class: social_wealth.SocialMoneyAgent
environment_class: social_wealth.MoneyEnv weight: 1
environment_params:
num_mesa_agents: 5
mesa_agent_type: social_wealth.MoneyAgent
N: 10 N: 10
width: 50 width: 50
height: 50 height: 50

View File

@@ -2,6 +2,7 @@ from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
class MyNetwork(NetworkModule): class MyNetwork(NetworkModule):
@@ -13,15 +14,18 @@ def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node # The model ensures there is 0 or 1 agent per node
portrayal = dict() portrayal = dict()
wealths = {
node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
}
portrayal["nodes"] = [ portrayal["nodes"] = [
{ {
"id": agent_id, "id": node_id,
"size": env.get_agent(agent_id).wealth, "size": 2 * (wealth + 1),
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959", "color": "#CC0000" if wealth == 0 else "#007959",
"color": "#CC0000", # "color": "#CC0000",
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}", "label": f"{node_id}: {wealth}",
} }
for (agent_id) in env.G.nodes for (node_id, wealth) in wealths.items()
] ]
portrayal["edges"] = [ portrayal["edges"] = [
@@ -29,7 +33,6 @@ def network_portrayal(env):
for edge_id, (source, target) in enumerate(env.G.edges) for edge_id, (source, target) in enumerate(env.G.edges)
] ]
return portrayal return portrayal
@@ -40,7 +43,7 @@ def gridPortrayal(agent):
:param agent: the agent in the simulation :param agent: the agent in the simulation
:return: the portrayal dictionary :return: the portrayal dictionary
""" """
color = max(10, min(agent.wealth*10, 100)) color = max(10, min(agent.wealth * 10, 100))
return { return {
"Shape": "rect", "Shape": "rect",
"w": 1, "w": 1,
@@ -51,11 +54,11 @@ def gridPortrayal(agent):
"Text": agent.unique_id, "Text": agent.unique_id,
"x": agent.pos[0], "x": agent.pos[0],
"y": agent.pos[1], "y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})" "Color": f"rgba(31, 10, 255, 0.{color})",
} }
grid = MyNetwork(network_portrayal, 500, 500, library="sigma") grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule( chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector" [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
) )
@@ -70,7 +73,6 @@ model_params = {
1, 1,
description="Choose how many agents to include in the model", description="Choose how many agents to include in the model",
), ),
"network_agents": [{"agent_type": SocialMoneyAgent}],
"height": UserSettableParameter( "height": UserSettableParameter(
"slider", "slider",
"height", "height",
@@ -79,7 +81,7 @@ model_params = {
10, 10,
1, 1,
description="Grid height", description="Grid height",
), ),
"width": UserSettableParameter( "width": UserSettableParameter(
"slider", "slider",
"width", "width",
@@ -88,13 +90,20 @@ model_params = {
10, 10,
1, 1,
description="Grid width", description="Grid width",
), ),
"network_params": { "agent_class": UserSettableParameter(
'generator': graph_generator "choice",
}, "Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
),
"generator": graph_generator,
} }
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
canvas_element = CanvasGrid(
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
)
server = ModularServer( server = ModularServer(

View File

@@ -1,23 +1,26 @@
''' """
This is an example that adds soil agents and environment in a normal This is an example that adds soil agents and environment in a normal
mesa workflow. mesa workflow.
''' """
from mesa import Agent as MesaAgent from mesa import Agent as MesaAgent
from mesa.space import MultiGrid from mesa.space import MultiGrid
# from mesa.time import RandomActivation # from mesa.time import RandomActivation
from mesa.datacollection import DataCollector from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner from mesa.batchrunner import BatchRunner
import networkx as nx import networkx as nx
from soil import NetworkAgent, Environment from soil import NetworkAgent, Environment, serialization
def compute_gini(model): def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents] agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths) x = sorted(agent_wealths)
N = len(list(model.agents)) N = len(list(model.agents))
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x)) B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return (1 + (1/N) - 2*B) return 1 + (1 / N) - 2 * B
class MoneyAgent(MesaAgent): class MoneyAgent(MesaAgent):
""" """
@@ -25,15 +28,14 @@ class MoneyAgent(MesaAgent):
It will only share wealth with neighbors based on grid proximity It will only share wealth with neighbors based on grid proximity
""" """
def __init__(self, unique_id, model): def __init__(self, unique_id, model, wealth=1):
super().__init__(unique_id=unique_id, model=model) super().__init__(unique_id=unique_id, model=model)
self.wealth = 1 self.wealth = wealth
def move(self): def move(self):
possible_steps = self.model.grid.get_neighborhood( possible_steps = self.model.grid.get_neighborhood(
self.pos, self.pos, moore=True, include_center=False
moore=True, )
include_center=False)
new_position = self.random.choice(possible_steps) new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position) self.model.grid.move_agent(self, new_position)
@@ -45,7 +47,7 @@ class MoneyAgent(MesaAgent):
self.wealth -= 1 self.wealth -= 1
def step(self): def step(self):
self.info("Crying wolf", self.pos) print("Crying wolf", self.pos)
self.move() self.move()
if self.wealth > 0: if self.wealth > 0:
self.give_money() self.give_money()
@@ -56,10 +58,10 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
def give_money(self): def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos])) cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents()) friends = set(self.get_neighbors())
self.info("Trying to give money") self.info("Trying to give money")
self.debug("Cellmates: ", cellmates) self.info("Cellmates: ", cellmates)
self.debug("Friends: ", friends) self.info("Friends: ", friends)
nearby_friends = list(cellmates & friends) nearby_friends = list(cellmates & friends)
@@ -69,14 +71,35 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
self.wealth -= 1 self.wealth -= 1
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
class MoneyEnv(Environment): class MoneyEnv(Environment):
"""A model with some number of agents.""" """A model with some number of agents."""
def __init__(self, N, width, height, *args, network_params, **kwargs):
network_params['n'] = N def __init__(
super().__init__(*args, network_params=network_params, **kwargs) self,
width,
height,
N,
generator=graph_generator,
agent_class=SocialMoneyAgent,
topology=None,
**kwargs
):
generator = serialization.deserialize(generator)
agent_class = serialization.deserialize(agent_class, globs=globals())
topology = generator(n=N)
super().__init__(topology=topology, N=N, **kwargs)
self.grid = MultiGrid(width, height, False) self.grid = MultiGrid(width, height, False)
self.populate_network(agent_class=agent_class)
# Create agents # Create agents
for agent in self.agents: for agent in self.agents:
x = self.random.randrange(self.grid.width) x = self.random.randrange(self.grid.width)
@@ -84,37 +107,31 @@ class MoneyEnv(Environment):
self.grid.place_agent(agent, (x, y)) self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector( self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
agent_reporters={"Wealth": "wealth"}) )
def graph_generator(n=5): if __name__ == "__main__":
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
if __name__ == '__main__': fixed_params = {
"generator": nx.complete_graph,
"width": 10,
G = graph_generator() "network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
fixed_params = {"topology": G, "height": 10,
"width": 10, }
"network_agents": [{"agent_type": SocialMoneyAgent,
'weight': 1}],
"height": 10}
variable_params = {"N": range(10, 100, 10)} variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(MoneyEnv, batch_run = BatchRunner(
variable_parameters=variable_params, MoneyEnv,
fixed_parameters=fixed_params, variable_parameters=variable_params,
iterations=5, fixed_parameters=fixed_params,
max_steps=100, iterations=5,
model_reporters={"Gini": compute_gini}) max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all() batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe() run_data = batch_run.get_model_vars_dataframe()
run_data.head() run_data.head()
print(run_data.Gini) print(run_data.Gini)
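Because the new `MoneyEnv.__init__` deserializes `generator` and `agent_class`, the environment can also be driven directly, mirroring the `fixed_params` used with `BatchRunner` above; a rough sketch, with the step count chosen arbitrarily:

```python
import networkx as nx
from social_wealth import MoneyEnv, compute_gini

# Mirrors the fixed_params passed to BatchRunner above
env = MoneyEnv(N=10, width=10, height=10, generator=nx.complete_graph)
for _ in range(10):
    env.step()
print(compute_gini(env))
```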

View File

@@ -4,24 +4,26 @@ from mesa.time import RandomActivation
from mesa.datacollection import DataCollector from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner from mesa.batchrunner import BatchRunner
def compute_gini(model): def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents] agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths) x = sorted(agent_wealths)
N = model.num_agents N = model.num_agents
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x)) B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return (1 + (1/N) - 2*B) return 1 + (1 / N) - 2 * B
class MoneyAgent(Agent): class MoneyAgent(Agent):
""" An agent with fixed initial wealth.""" """An agent with fixed initial wealth."""
def __init__(self, unique_id, model): def __init__(self, unique_id, model):
super().__init__(unique_id, model) super().__init__(unique_id, model)
self.wealth = 1 self.wealth = 1
def move(self): def move(self):
possible_steps = self.model.grid.get_neighborhood( possible_steps = self.model.grid.get_neighborhood(
self.pos, self.pos, moore=True, include_center=False
moore=True, )
include_center=False)
new_position = self.random.choice(possible_steps) new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position) self.model.grid.move_agent(self, new_position)
@@ -37,8 +39,10 @@ class MoneyAgent(Agent):
if self.wealth > 0: if self.wealth > 0:
self.give_money() self.give_money()
class MoneyModel(Model): class MoneyModel(Model):
"""A model with some number of agents.""" """A model with some number of agents."""
def __init__(self, N, width, height): def __init__(self, N, width, height):
self.num_agents = N self.num_agents = N
self.grid = MultiGrid(width, height, True) self.grid = MultiGrid(width, height, True)
@@ -55,29 +59,29 @@ class MoneyModel(Model):
self.grid.place_agent(a, (x, y)) self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector( self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
agent_reporters={"Wealth": "wealth"}) )
def step(self): def step(self):
self.datacollector.collect(self) self.datacollector.collect(self)
self.schedule.step() self.schedule.step()
if __name__ == '__main__': if __name__ == "__main__":
fixed_params = {"width": 10, fixed_params = {"width": 10, "height": 10}
"height": 10}
variable_params = {"N": range(10, 500, 10)} variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(MoneyModel, batch_run = BatchRunner(
variable_params, MoneyModel,
fixed_params, variable_params,
iterations=5, fixed_params,
max_steps=100, iterations=5,
model_reporters={"Gini": compute_gini}) max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all() batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe() run_data = batch_run.get_model_vars_dataframe()
run_data.head() run_data.head()
print(run_data.Gini) print(run_data.Gini)

View File

@@ -89,11 +89,11 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_dumb\r\n", "name: Sim_all_dumb\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -113,19 +113,19 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_half_herd\r\n", "name: Sim_half_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -145,12 +145,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_herd\r\n", "name: Sim_all_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
@@ -172,12 +172,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_wise_herd\r\n", "name: Sim_wise_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -198,12 +198,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_wise\r\n", "name: Sim_all_wise\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",

View File

@@ -1,19 +1,18 @@
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_dumb name: Sim_all_dumb
network_agents: network_agents:
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -24,28 +23,27 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_half_herd name: Sim_half_herd
network_agents: network_agents:
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -56,21 +54,20 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_herd name: Sim_all_herd
network_agents: network_agents:
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
state_id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
state_id: neutral state_id: neutral
@@ -82,22 +79,21 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
prob_neighbor_cure: 0.1 prob_neighbor_cure: 0.1
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_wise_herd name: Sim_wise_herd
network_agents: network_agents:
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
state_id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -108,22 +104,21 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
prob_neighbor_cure: 0.1 prob_neighbor_cure: 0.1
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_wise name: Sim_all_wise
network_agents: network_agents:
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
state_id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1

View File

@@ -1,79 +1,87 @@
from soil.agents import FSM, state, default_state, prob from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging import logging
class DumbViewer(FSM): class DumbViewer(FSM, NetworkAgent):
''' """
A viewer that gets infected via TV (if it has one) and tries to infect A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected. its neighbors once it's infected.
''' """
defaults = {
'prob_neighbor_spread': 0.5, prob_neighbor_spread = 0.5
'prob_tv_spread': 0.1, prob_tv_spread = 0.1
} has_been_infected = False
@default_state @default_state
@state @state
def neutral(self): def neutral(self):
if self['has_tv']: if self["has_tv"]:
if prob(self.env['prob_tv_spread']): if self.prob(self.model["prob_tv_spread"]):
self.set_state(self.infected) return self.infected
if self.has_been_infected:
return self.infected
@state @state
def infected(self): def infected(self):
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id): for neighbor in self.get_neighbors(state_id=self.neutral.id):
if prob(self.env['prob_neighbor_spread']): if self.prob(self.model["prob_neighbor_spread"]):
neighbor.infect() neighbor.infect()
def infect(self): def infect(self):
self.set_state(self.infected) """
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.has_been_infected = True
class HerdViewer(DumbViewer): class HerdViewer(DumbViewer):
''' """
A viewer whose probability of infection depends on the state of its neighbors. A viewer whose probability of infection depends on the state of its neighbors.
''' """
def infect(self): def infect(self):
infected = self.count_neighboring_agents(state_id=self.infected.id) """Notice again that this is NOT a state. See DumbViewer.infect for reference"""
total = self.count_neighboring_agents() infected = self.count_neighbors(state_id=self.infected.id)
prob_infect = self.env['prob_neighbor_spread'] * infected/total total = self.count_neighbors()
self.debug('prob_infect', prob_infect) prob_infect = self.model["prob_neighbor_spread"] * infected / total
if prob(prob_infect): self.debug("prob_infect", prob_infect)
self.set_state(self.infected.id) if self.prob(prob_infect):
self.has_been_infected = True
class WiseViewer(HerdViewer): class WiseViewer(HerdViewer):
''' """
A viewer that can change its mind. A viewer that can change its mind.
''' """
defaults = { defaults = {
'prob_neighbor_spread': 0.5, "prob_neighbor_spread": 0.5,
'prob_neighbor_cure': 0.25, "prob_neighbor_cure": 0.25,
'prob_tv_spread': 0.1, "prob_tv_spread": 0.1,
} }
@state @state
def cured(self): def cured(self):
prob_cure = self.env['prob_neighbor_cure'] prob_cure = self.model["prob_neighbor_cure"]
for neighbor in self.get_neighboring_agents(state_id=self.infected.id): for neighbor in self.get_neighbors(state_id=self.infected.id):
if prob(prob_cure): if self.prob(prob_cure):
try: try:
neighbor.cure() neighbor.cure()
except AttributeError: except AttributeError:
self.debug('Viewer {} cannot be cured'.format(neighbor.id)) self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self): def cure(self):
self.set_state(self.cured.id) self.has_been_cured = True
@state @state
def infected(self): def infected(self):
cured = max(self.count_neighboring_agents(self.cured.id), if self.has_been_cured:
1.0) return self.cured
infected = max(self.count_neighboring_agents(self.infected.id), cured = max(self.count_neighbors(self.cured.id), 1.0)
1.0) infected = max(self.count_neighbors(self.infected.id), 1.0)
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected) prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
if prob(prob_cure): if self.prob(prob_cure):
return self.cure() return self.cured
return self.set_state(super().infected)
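Putting the conventions from this refactor together (returning the next state instead of calling `set_state`, `self.prob` and `self.model` instead of the module-level `prob` and `self.env`, and `get_neighbors` instead of `get_neighboring_agents`), a minimal agent in the new style would look roughly like this; the `prob_tv_spread` parameter is taken from the example above, while the class name and behaviour are illustrative:

```python
from soil.agents import FSM, NetworkAgent, state, default_state


class MinimalSpreader(FSM, NetworkAgent):
    """Illustrative agent following the newer conventions shown above."""

    @default_state
    @state
    def idle(self):
        # self.prob / self.model replace the older prob() / self.env helpers
        if self.prob(self.model["prob_tv_spread"]):
            return self.active  # returning a state replaces set_state()

    @state
    def active(self):
        for neighbor in self.get_neighbors(state_id=self.idle.id):
            self.debug("Poking neighbor", neighbor.unique_id)
```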

View File

@@ -1,6 +1,6 @@
''' """
Example of a fully programmatic simulation, without definition files. Example of a fully programmatic simulation, without definition files.
''' """
from soil import Simulation, agents from soil import Simulation, agents
from networkx import Graph from networkx import Graph
import logging import logging
@@ -14,25 +14,28 @@ def mygenerator():
class MyAgent(agents.FSM): class MyAgent(agents.FSM):
@agents.default_state @agents.default_state
@agents.state @agents.state
def neutral(self): def neutral(self):
self.info('I am running') self.debug("I am running")
if agents.prob(0.2):
self.info("This runs 2/10 times on average")
s = Simulation(name='Programmatic', s = Simulation(
network_params={'generator': mygenerator}, name="Programmatic",
num_trials=1, network_params={"generator": mygenerator},
max_time=100, num_trials=1,
agent_type=MyAgent, max_time=100,
dry_run=True) agent_class=MyAgent,
dry_run=True,
)
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
envs = s.run() envs = s.run()
s.dump_yaml() # Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')
for env in envs:
env.dump_csv()

View File

@@ -1,12 +1,12 @@
from soil.agents import FSM, state, default_state from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment from soil import Environment
from random import random, shuffle
from itertools import islice from itertools import islice
import logging import logging
class CityPubs(Environment): class CityPubs(Environment):
'''Environment with Pubs''' """Environment with Pubs"""
level = logging.INFO level = logging.INFO
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs): def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
@@ -14,70 +14,70 @@ class CityPubs(Environment):
pubs = {} pubs = {}
for i in range(number_of_pubs): for i in range(number_of_pubs):
newpub = { newpub = {
'name': 'The awesome pub #{}'.format(i), "name": "The awesome pub #{}".format(i),
'open': True, "open": True,
'capacity': pub_capacity, "capacity": pub_capacity,
'occupancy': 0, "occupancy": 0,
} }
pubs[newpub['name']] = newpub pubs[newpub["name"]] = newpub
self['pubs'] = pubs self["pubs"] = pubs
def enter(self, pub_id, *nodes): def enter(self, pub_id, *nodes):
'''Agents will try to enter. The pub checks if it is possible''' """Agents will try to enter. The pub checks if it is possible"""
try: try:
pub = self['pubs'][pub_id] pub = self["pubs"][pub_id]
except KeyError: except KeyError:
raise ValueError('Pub {} is not available'.format(pub_id)) raise ValueError("Pub {} is not available".format(pub_id))
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])): if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
return False return False
pub['occupancy'] += len(nodes) pub["occupancy"] += len(nodes)
for node in nodes: for node in nodes:
node['pub'] = pub_id node["pub"] = pub_id
return True return True
def available_pubs(self): def available_pubs(self):
for pub in self['pubs'].values(): for pub in self["pubs"].values():
if pub['open'] and (pub['occupancy'] < pub['capacity']): if pub["open"] and (pub["occupancy"] < pub["capacity"]):
yield pub['name'] yield pub["name"]
def exit(self, pub_id, *node_ids): def exit(self, pub_id, *node_ids):
'''Agents will notify the pub they want to leave''' """Agents will notify the pub they want to leave"""
try: try:
pub = self['pubs'][pub_id] pub = self["pubs"][pub_id]
except KeyError: except KeyError:
raise ValueError('Pub {} is not available'.format(pub_id)) raise ValueError("Pub {} is not available".format(pub_id))
for node_id in node_ids: for node_id in node_ids:
node = self.get_agent(node_id) node = self.get_agent(node_id)
if pub_id == node['pub']: if pub_id == node["pub"]:
del node['pub'] del node["pub"]
pub['occupancy'] -= 1 pub["occupancy"] -= 1
class Patron(FSM): class Patron(FSM, NetworkAgent):
'''Agent that looks for friends to drink with. It will do three things: """Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with 1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in. 2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home. 3) While in the bar, patrons only drink, until they get drunk and taken home.
''' """
level = logging.DEBUG level = logging.DEBUG
defaults = { pub = None
'pub': None, drunk = False
'drunk': False, pints = 0
'pints': 0, max_pints = 3
'max_pints': 3, kicked_out = False
}
@default_state @default_state
@state @state
def looking_for_friends(self): def looking_for_friends(self):
'''Look for friends to drink with''' """Look for friends to drink with"""
self.info('I am looking for friends') self.info("I am looking for friends")
available_friends = list(self.get_agents(drunk=False, available_friends = list(
pub=None, self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
state_id=self.looking_for_friends.id)) )
if not available_friends: if not available_friends:
self.info('Life sucks and I\'m alone!') self.info("Life sucks and I'm alone!")
return self.at_home return self.at_home
befriended = self.try_friends(available_friends) befriended = self.try_friends(available_friends)
if befriended: if befriended:
@@ -85,91 +85,91 @@ class Patron(FSM):
@state @state
def looking_for_pub(self): def looking_for_pub(self):
'''Look for a pub that accepts me and my friends''' """Look for a pub that accepts me and my friends"""
if self['pub'] != None: if self["pub"] != None:
return self.sober_in_pub return self.sober_in_pub
self.debug('I am looking for a pub') self.debug("I am looking for a pub")
group = list(self.get_neighboring_agents()) group = list(self.get_neighbors())
for pub in self.env.available_pubs(): for pub in self.model.available_pubs():
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group))) self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
if self.env.enter(pub, self, *group): if self.model.enter(pub, self, *group):
self.info('We\'re all {} getting in {}!'.format(len(group), pub)) self.info("We're all {} getting in {}!".format(len(group), pub))
return self.sober_in_pub return self.sober_in_pub
@state @state
def sober_in_pub(self): def sober_in_pub(self):
'''Drink up.''' """Drink up."""
self.drink() self.drink()
if self['pints'] > self['max_pints']: if self["pints"] > self["max_pints"]:
return self.drunk_in_pub return self.drunk_in_pub
@state @state
def drunk_in_pub(self): def drunk_in_pub(self):
'''I'm out. Take me home!''' """I'm out. Take me home!"""
self.info('I\'m so drunk. Take me home!') self.info("I'm so drunk. Take me home!")
self['drunk'] = True self["drunk"] = True
pass # out drunk if self.kicked_out:
return self.at_home
pass # out drunk
@state @state
def at_home(self): def at_home(self):
'''The end''' """The end"""
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True) others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
self.debug('I\'m home. Just like {} of my friends'.format(len(others))) self.debug("I'm home. Just like {} of my friends".format(len(others)))
def drink(self): def drink(self):
self['pints'] += 1 self["pints"] += 1
self.debug('Cheers to that') self.debug("Cheers to that")
def kick_out(self): def kick_out(self):
self.set_state(self.at_home) self.kicked_out = True
def befriend(self, other_agent, force=False): def befriend(self, other_agent, force=False):
''' """
Try to become friends with another agent. The chances of Try to become friends with another agent. The chances of
success depend on both agents' openness. success depend on both agents' openness.
''' """
if force or self['openness'] > random(): if force or self["openness"] > self.random.random():
self.env.add_edge(self, other_agent) self.add_edge(self, other_agent)
self.info('Made some friend {}'.format(other_agent)) self.info("Made some friend {}".format(other_agent))
return True return True
return False return False
def try_friends(self, others): def try_friends(self, others):
''' Look for random agents around me and try to befriend them''' """Look for random agents around me and try to befriend them"""
befriended = False befriended = False
k = int(10*self['openness']) k = int(10 * self["openness"])
shuffle(others) self.random.shuffle(others)
for friend in islice(others, k): # random.choice >= 3.7 for friend in islice(others, k): # random.choice >= 3.7
if friend == self: if friend == self:
continue continue
if friend.befriend(self): if friend.befriend(self):
self.befriend(friend, force=True) self.befriend(friend, force=True)
self.debug('Hooray! new friend: {}'.format(friend.id)) self.debug("Hooray! new friend: {}".format(friend.id))
befriended = True befriended = True
else: else:
self.debug('{} does not want to be friends'.format(friend.id)) self.debug("{} does not want to be friends".format(friend.id))
return befriended return befriended
class Police(FSM): class Police(FSM):
'''Simple agent to take drunk people out of pubs.''' """Simple agent to take drunk people out of pubs."""
level = logging.INFO level = logging.INFO
@default_state @default_state
@state @state
def patrol(self): def patrol(self):
drunksters = list(self.get_agents(drunk=True, drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
state_id=Patron.drunk_in_pub.id))
for drunk in drunksters: for drunk in drunksters:
self.info('Kicking out the trash: {}'.format(drunk.id)) self.info("Kicking out the trash: {}".format(drunk.id))
drunk.kick_out() drunk.kick_out()
else: else:
self.info('No trash to take out. Too bad.') self.info("No trash to take out. Too bad.")
if __name__ == '__main__': if __name__ == "__main__":
from soil import simulation from soil import simulation
simulation.run_from_config('pubcrawl.yml',
dry_run=True, simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
dump=None,
parallel=False)

View File

@@ -1,25 +1,25 @@
--- ---
name: pubcrawl name: pubcrawl
num_trials: 3 num_trials: 3
max_time: 10 max_steps: 10
dump: false dump: false
network_params: network_params:
# Generate 100 empty nodes. They will be assigned a network agent # Generate 100 empty nodes. They will be assigned a network agent
generator: empty_graph generator: empty_graph
n: 30 n: 30
network_agents: network_agents:
- agent_type: pubcrawl.Patron - agent_class: pubcrawl.Patron
description: Extroverted patron description: Extroverted patron
state: state:
openness: 1.0 openness: 1.0
weight: 9 weight: 9
- agent_type: pubcrawl.Patron - agent_class: pubcrawl.Patron
description: Introverted patron description: Introverted patron
state: state:
openness: 0.1 openness: 0.1
weight: 1 weight: 1
environment_agents: environment_agents:
- agent_type: pubcrawl.Police - agent_class: pubcrawl.Police
environment_class: pubcrawl.CityPubs environment_class: pubcrawl.CityPubs
environment_params: environment_params:
altercations: 0 altercations: 0


@@ -0,0 +1,14 @@
There are two similar implementations of this simulation.
- `basic`. Using simple primitives
- `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.
The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:
```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```
To learn more about how this functionality works, check out the `soil.easy` function.
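For a fully programmatic run, the examples end with an equivalent snippet; a minimal sketch (using the same `soil.easy` context manager and the example's own `rabbits.yml` configuration) looks like this:
```
from soil import easy

# Minimal sketch: load the example's configuration and run it,
# mirroring the __main__ block of rabbit_agents.py.
with easy("rabbits.yml") as sim:
    sim.run()
```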


@@ -0,0 +1,150 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(NetworkAgent, FSM):
sexual_maturity = 30
life_expectancy = 300
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.age = 0
self.offspring = 0
return self.youngling
@state
def youngling(self):
self.age += 1
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
self.age += 1
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Take a break
class Female(Rabbit):
gestation = 10
pregnancy = -1
@state
def fertile(self):
# Just wait for a Male
self.age += 1
if self.age > self.life_expectancy:
return self.dead
if self.pregnancy >= 0:
return self.pregnant
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.pregnancy = 0
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.info("I am pregnant")
self.age += 1
if self.age >= self.life_expectancy:
return self.die()
if self.pregnancy < self.gestation:
self.pregnancy += 1
return
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
try:
child.add_edge(self.mate)
self.model.agents[self.mate].offspring += 1
except ValueError:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
self.pregnancy = -1
return self.fertile
def die(self):
if "pregnancy" in self and self["pregnancy"] > -1:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.get_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()


@@ -0,0 +1,42 @@
---
version: '2'
name: rabbits_basic
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}


@@ -0,0 +1,157 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(FSM, NetworkAgent):
sexual_maturity = 30
life_expectancy = 300
birth = None
@property
def age(self):
if self.birth is None:
return None
return self.now - self.birth
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.birth = self.now
self.offspring = 0
return self.youngling, Delta(self.sexual_maturity - self.age)
@state
def youngling(self):
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Do not try to impregnate other females
class Female(Rabbit):
gestation = 10
conception = None
@state
def fertile(self):
# Just wait for a Male
if self.age > self.life_expectancy:
return self.dead
if self.conception is not None:
return self.pregnant
@property
def pregnancy(self):
if self.conception is None:
return None
return self.now - self.conception
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.conception = self.now
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.debug("I am pregnant")
if self.age > self.life_expectancy:
self.info("Dying before giving birth")
return self.die()
if self.pregnancy >= self.gestation:
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
if self.mate:
child.add_edge(self.mate)
self.mate.offspring += 1
else:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
return self.fertile
def die(self):
if self.pregnancy is not None:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()


@@ -0,0 +1,42 @@
---
version: '2'
name: rabbits_improved
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}


@@ -1,122 +0,0 @@
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
from enum import Enum
from random import random, choice
import logging
import math
class Genders(Enum):
male = 'male'
female = 'female'
class RabbitModel(FSM):
level = logging.INFO
defaults = {
'age': 0,
'gender': Genders.male.value,
'mating_prob': 0.001,
'offspring': 0,
}
sexual_maturity = 3 #4*30
life_expectancy = 365 * 3
gestation = 33
pregnancy = -1
max_females = 5
@default_state
@state
def newborn(self):
self.debug(f'I am a newborn at age {self["age"]}')
self['age'] += 1
if self['age'] >= self.sexual_maturity:
self.debug('I am fertile!')
return self.fertile
@state
def fertile(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
if self['gender'] == Genders.female.value:
return
# Males try to mate
for f in self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False, limit=self.max_females):
r = random()
if r < self['mating_prob']:
self.impregnate(f)
break # Take a break
def impregnate(self, whom):
if self['gender'] == Genders.female.value:
raise NotImplementedError('Females cannot impregnate')
whom['pregnancy'] = 0
whom['mate'] = self.id
whom.set_state(whom.pregnant)
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
@state
def pregnant(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
self['pregnancy'] += 1
self.debug('Pregnancy: {}'.format(self['pregnancy']))
if self['pregnancy'] >= self.gestation:
number_of_babies = int(8+4*random())
self.info('Having {} babies'.format(number_of_babies))
for i in range(number_of_babies):
state = {}
state['gender'] = choice(list(Genders)).value
child = self.env.add_node(self.__class__, state)
self.env.add_edge(self.id, child.id)
self.env.add_edge(self['mate'], child.id)
# self.add_edge()
self.debug('A BABY IS COMING TO LIFE')
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.topology.number_of_nodes())+1
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
self['offspring'] += 1
self.env.get_agent(self['mate'])['offspring'] += 1
del self['mate']
self['pregnancy'] = -1
return self.fertile
@state
def dead(self):
self.info('Agent {} is dying'.format(self.id))
if 'pregnancy' in self and self['pregnancy'] > -1:
self.info('A mother has died carrying a baby!!')
self.die()
return
class RandomAccident(NetworkAgent):
level = logging.DEBUG
def step(self):
rabbits_total = self.topology.number_of_nodes()
if 'rabbits_alive' not in self.env:
self.env['rabbits_alive'] = 0
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
for i in self.env.network_agents:
if i.state['id'] == i.dead.id:
continue
r = random()
if r < prob_death:
self.debug('I killed a rabbit: {}'.format(i.id))
rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
i.set_state(i.dead)
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
if self.count_agents(state_id=RabbitModel.dead.id) == self.topology.number_of_nodes():
self.die()


@@ -1,23 +0,0 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 150
interval: 1
seed: MySeed
agent_type: RabbitModel
environment_agents:
- agent_type: RandomAccident
environment_params:
prob_death: 0.001
default_state:
mating_prob: 0.01
topology:
nodes:
- id: 1
state:
gender: female
- id: 0
state:
gender: male
directed: true
links: []


@@ -1,45 +1,43 @@
''' """
Example of setting a Example of setting a
Example of a fully programmatic simulation, without definition files. Example of a fully programmatic simulation, without definition files.
''' """
from soil import Simulation, agents from soil import Simulation, agents
from soil.time import Delta from soil.time import Delta
from random import expovariate
import logging
class MyAgent(agents.FSM): class MyAgent(agents.FSM):
''' """
An agent that first does a ping An agent that first does a ping
''' """
defaults = {'pong_counts': 2} defaults = {"pong_counts": 2}
@agents.default_state @agents.default_state
@agents.state @agents.state
def ping(self): def ping(self):
self.info('Ping') self.info("Ping")
return self.pong, Delta(expovariate(1/16)) return self.pong, Delta(self.random.expovariate(1 / 16))
@agents.state @agents.state
def pong(self): def pong(self):
self.info('Pong') self.info("Pong")
self.pong_counts -= 1 self.pong_counts -= 1
self.info(str(self.pong_counts)) self.info(str(self.pong_counts))
if self.pong_counts < 1: if self.pong_counts < 1:
return self.die() return self.die()
return None, Delta(expovariate(1/16)) return None, Delta(self.random.expovariate(1 / 16))
-s = Simulation(name='Programmatic',
-               network_agents=[{'agent_type': MyAgent, 'id': 0}],
-               topology={'nodes': [{'id': 0}], 'links': []},
-               num_trials=1,
-               max_time=100,
-               agent_type=MyAgent,
-               dry_run=True)
+s = Simulation(
+    name="Programmatic",
+    network_agents=[{"agent_class": MyAgent, "id": 0}],
+    topology={"nodes": [{"id": 0}], "links": []},
+    num_trials=1,
+    max_time=100,
+    agent_class=MyAgent,
+    dry_run=True,
+)
+
+logging.basicConfig(level=logging.INFO)
 envs = s.run()


@@ -6,20 +6,20 @@ template:
group: simple group: simple
num_trials: 1 num_trials: 1
interval: 1 interval: 1
max_time: 2 max_steps: 2
seed: "CompleteSeed!" seed: "CompleteSeed!"
dump: false dump: false
network_params: model_params:
generator: complete_graph network_params:
n: 10 generator: complete_graph
network_agents: n: 10
- agent_type: CounterModel network_agents:
weight: "{{ x1 }}" - agent_class: CounterModel
state: weight: "{{ x1 }}"
state_id: 0 state:
- agent_type: AggregatedCounter state_id: 0
weight: "{{ 1 - x1 }}" - agent_class: AggregatedCounter
environment_params: weight: "{{ 1 - x1 }}"
name: "{{ x3 }}" name: "{{ x3 }}"
skip_test: true skip_test: true
vars: vars:


@@ -1,4 +1,3 @@
import random
import networkx as nx import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, state, default_state from soil.agents import Geo, NetworkAgent, FSM, state, default_state
from soil import Environment from soil import Environment
@@ -21,56 +20,83 @@ class TerroristSpreadModel(FSM, Geo):
def __init__(self, model=None, unique_id=0, state=()): def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state) super().__init__(model=model, unique_id=unique_id, state=state)
self.information_spread_intensity = model.environment_params['information_spread_intensity'] self.information_spread_intensity = model.environment_params[
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence'] "information_spread_intensity"
self.prob_interaction = model.environment_params['prob_interaction'] ]
self.terrorist_additional_influence = model.environment_params[
"terrorist_additional_influence"
]
self.prob_interaction = model.environment_params["prob_interaction"]
if self['id'] == self.civilian.id: # Civilian if self["id"] == self.civilian.id: # Civilian
self.mean_belief = random.uniform(0.00, 0.5) self.mean_belief = self.random.uniform(0.00, 0.5)
elif self['id'] == self.terrorist.id: # Terrorist elif self["id"] == self.terrorist.id: # Terrorist
self.mean_belief = random.uniform(0.8, 1.00) self.mean_belief = self.random.uniform(0.8, 1.00)
elif self['id'] == self.leader.id: # Leader elif self["id"] == self.leader.id: # Leader
self.mean_belief = 1.00 self.mean_belief = 1.00
else: else:
raise Exception('Invalid state id: {}'.format(self['id'])) raise Exception("Invalid state id: {}".format(self["id"]))
if 'min_vulnerability' in model.environment_params:
self.vulnerability = random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
else :
self.vulnerability = random.uniform( 0, model.environment_params['max_vulnerability'] )
if "min_vulnerability" in model.environment_params:
self.vulnerability = self.random.uniform(
model.environment_params["min_vulnerability"],
model.environment_params["max_vulnerability"],
)
else:
self.vulnerability = self.random.uniform(
0, model.environment_params["max_vulnerability"]
)
@state @state
def civilian(self): def civilian(self):
neighbours = list(self.get_neighboring_agents(agent_type=TerroristSpreadModel)) neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
if len(neighbours) > 0: if len(neighbours) > 0:
# Only interact with some of the neighbors # Only interact with some of the neighbors
interactions = list(n for n in neighbours if random.random() <= self.prob_interaction) interactions = list(
influence = sum( self.degree(i) for i in interactions ) n for n in neighbours if self.random.random() <= self.prob_interaction
mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions ) )
mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity ) influence = sum(self.degree(i) for i in interactions)
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability ) mean_belief = sum(
i.mean_belief * self.degree(i) / influence for i in interactions
)
mean_belief = (
mean_belief * self.information_spread_intensity
+ self.mean_belief * (1 - self.information_spread_intensity)
)
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
if self.mean_belief >= 0.8: if self.mean_belief >= 0.8:
return self.terrorist return self.terrorist
@state @state
def leader(self): def leader(self):
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence ) self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]): for neighbour in self.get_neighbors(
state_id=[self.terrorist.id, self.leader.id]
):
if self.betweenness(neighbour) > self.betweenness(self): if self.betweenness(neighbour) > self.betweenness(self):
return self.terrorist return self.terrorist
@state @state
def terrorist(self): def terrorist(self):
neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id], neighbours = self.get_agents(
agent_type=TerroristSpreadModel, state_id=[self.terrorist.id, self.leader.id],
limit_neighbors=True) agent_class=TerroristSpreadModel,
limit_neighbors=True,
)
if len(neighbours) > 0: if len(neighbours) > 0:
influence = sum( self.degree(n) for n in neighbours ) influence = sum(self.degree(n) for n in neighbours)
mean_belief = sum( n.mean_belief * self.degree(n) / influence for n in neighbours ) mean_belief = sum(
mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability ) n.mean_belief * self.degree(n) / influence for n in neighbours
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence ) )
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
self.mean_belief = self.mean_belief ** (
1 - self.terrorist_additional_influence
)
# Check if there are any leaders in the group # Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours)) leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
@@ -82,6 +108,34 @@ class TerroristSpreadModel(FSM, Geo):
return return
return self.leader return self.leader
def ego_search(self, steps=1, center=False, node=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = as_node(node if node is not None else self)
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
if (
force
or (not hasattr(self.model, "_degree"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
if (
force
or (not hasattr(self.model, "_betweenness"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[node]
class TrainingAreaModel(FSM, Geo): class TrainingAreaModel(FSM, Geo):
""" """
@@ -95,17 +149,20 @@ class TrainingAreaModel(FSM, Geo):
def __init__(self, model=None, unique_id=0, state=()): def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state) super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params['training_influence'] self.training_influence = model.environment_params["training_influence"]
if 'min_vulnerability' in model.environment_params: if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability'] self.min_vulnerability = model.environment_params["min_vulnerability"]
else: self.min_vulnerability = 0 else:
self.min_vulnerability = 0
@default_state @default_state
@state @state
def terrorist(self): def terrorist(self):
for neighbour in self.get_neighboring_agents(agent_type=TerroristSpreadModel): for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
if neighbour.vulnerability > self.min_vulnerability: if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence ) neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.training_influence
)
class HavenModel(FSM, Geo): class HavenModel(FSM, Geo):
@@ -122,14 +179,15 @@ class HavenModel(FSM, Geo):
def __init__(self, model=None, unique_id=0, state=()): def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state) super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params['haven_influence'] self.haven_influence = model.environment_params["haven_influence"]
if 'min_vulnerability' in model.environment_params: if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability'] self.min_vulnerability = model.environment_params["min_vulnerability"]
else: self.min_vulnerability = 0 else:
self.max_vulnerability = model.environment_params['max_vulnerability'] self.min_vulnerability = 0
self.max_vulnerability = model.environment_params["max_vulnerability"]
def get_occupants(self, **kwargs): def get_occupants(self, **kwargs):
return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs) return self.get_neighbors(agent_class=TerroristSpreadModel, **kwargs)
@state @state
def civilian(self): def civilian(self):
@@ -139,14 +197,18 @@ class HavenModel(FSM, Geo):
for neighbour in self.get_occupants(): for neighbour in self.get_occupants():
if neighbour.vulnerability > self.min_vulnerability: if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence ) neighbour.vulnerability = neighbour.vulnerability * (
1 - self.haven_influence
)
return self.civilian return self.civilian
@state @state
def terrorist(self): def terrorist(self):
for neighbour in self.get_occupants(): for neighbour in self.get_occupants():
if neighbour.vulnerability < self.max_vulnerability: if neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence ) neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.haven_influence
)
return self.terrorist return self.terrorist
@@ -165,10 +227,10 @@ class TerroristNetworkModel(TerroristSpreadModel):
def __init__(self, model=None, unique_id=0, state=()): def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state) super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = model.environment_params['vision_range'] self.vision_range = model.environment_params["vision_range"]
self.sphere_influence = model.environment_params['sphere_influence'] self.sphere_influence = model.environment_params["sphere_influence"]
self.weight_social_distance = model.environment_params['weight_social_distance'] self.weight_social_distance = model.environment_params["weight_social_distance"]
self.weight_link_distance = model.environment_params['weight_link_distance'] self.weight_link_distance = model.environment_params["weight_link_distance"]
@state @state
def terrorist(self): def terrorist(self):
@@ -181,28 +243,47 @@ class TerroristNetworkModel(TerroristSpreadModel):
return super().leader() return super().leader()
def update_relationships(self): def update_relationships(self):
if self.count_neighboring_agents(state_id=self.civilian.id) == 0: if self.count_neighbors(state_id=self.civilian.id) == 0:
close_ups = set(self.geo_search(radius=self.vision_range, agent_type=TerroristNetworkModel)) close_ups = set(
step_neighbours = set(self.ego_search(self.sphere_influence, agent_type=TerroristNetworkModel, center=False)) self.geo_search(
neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_type=TerroristNetworkModel)) radius=self.vision_range, agent_class=TerroristNetworkModel
)
)
step_neighbours = set(
self.ego_search(
self.sphere_influence,
agent_class=TerroristNetworkModel,
center=False,
)
)
neighbours = set(
agent.id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search): for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id) social_distance = 1 / self.shortest_path_length(agent.id)
spatial_proximity = ( 1 - self.get_distance(agent.id) ) spatial_proximity = 1 - self.get_distance(agent.id)
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity prob_new_interaction = (
if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction: self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent["id"] == agent.civilian.id
and self.random.random() < prob_new_interaction
):
self.add_edge(agent) self.add_edge(agent)
break break
def get_distance(self, target): def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.topology, 'pos')[self.id] source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
target_x, target_y = nx.get_node_attributes(self.topology, 'pos')[target] target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs( source_x - target_x ) dx = abs(source_x - target_x)
dy = abs( source_y - target_y ) dy = abs(source_y - target_y)
return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 ) return (dx**2 + dy**2) ** (1 / 2)
def shortest_path_length(self, target): def shortest_path_length(self, target):
try: try:
return nx.shortest_path_length(self.topology, self.id, target) return nx.shortest_path_length(self.G, self.id, target)
except nx.NetworkXNoPath: except nx.NetworkXNoPath:
return float('inf') return float("inf")


@@ -1,32 +1,31 @@
name: TerroristNetworkModel_sim name: TerroristNetworkModel_sim
load_module: TerroristNetworkModel max_steps: 150
max_time: 150
num_trials: 1 num_trials: 1
network_params: model_params:
generator: random_geometric_graph network_params:
radius: 0.2 generator: random_geometric_graph
# generator: geographical_threshold_graph radius: 0.2
# theta: 20 # generator: geographical_threshold_graph
n: 100 # theta: 20
network_agents: n: 100
- agent_type: TerroristNetworkModel network_agents:
weight: 0.8 - agent_class: TerroristNetworkModel.TerroristNetworkModel
state: weight: 0.8
id: civilian # Civilians state:
- agent_type: TerroristNetworkModel id: civilian # Civilians
weight: 0.1 - agent_class: TerroristNetworkModel.TerroristNetworkModel
state: weight: 0.1
id: leader # Leaders state:
- agent_type: TrainingAreaModel id: leader # Leaders
weight: 0.05 - agent_class: TerroristNetworkModel.TrainingAreaModel
state: weight: 0.05
id: terrorist # Terrorism state:
- agent_type: HavenModel id: terrorist # Terrorism
weight: 0.05 - agent_class: TerroristNetworkModel.HavenModel
state: weight: 0.05
id: civilian # Civilian state:
id: civilian # Civilian
environment_params:
# TerroristSpreadModel # TerroristSpreadModel
information_spread_intensity: 0.7 information_spread_intensity: 0.7
terrorist_additional_influence: 0.035 terrorist_additional_influence: 0.035


@@ -1,14 +1,15 @@
 ---
 name: torvalds_example
-max_time: 10
+max_steps: 10
 interval: 2
-agent_type: CounterModel
-default_state:
-  skill_level: 'beginner'
-network_params:
-  path: 'torvalds.edgelist'
-states:
-  Torvalds:
-    skill_level: 'God'
-  balkian:
-    skill_level: 'developer'
+model_params:
+  agent_class: CounterModel
+  default_state:
+    skill_level: 'beginner'
+  network_params:
+    path: 'torvalds.edgelist'
+  states:
+    Torvalds:
+      skill_level: 'God'
+    balkian:
+      skill_level: 'developer'


@@ -12330,11 +12330,11 @@ Notice how node 0 is the only one with a TV.</p>
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span> <span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
<span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span> <span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span> <span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span> <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="n">EVENT_TIME</span> <span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="n">EVENT_TIME</span>
<span class="p">}}],</span> <span class="p">}}],</span>
<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span> <span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span> <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span>
<span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span> <span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span>
<span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span> <span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span>
@@ -12468,14 +12468,14 @@ For this demo, we will use a python dictionary:</p>
<span class="p">},</span> <span class="p">},</span>
<span class="s1">&#39;network_agents&#39;</span><span class="p">:</span> <span class="p">[</span> <span class="s1">&#39;network_agents&#39;</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span> <span class="p">{</span>
<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span> <span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span> <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span> <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span> <span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span>
<span class="p">}</span> <span class="p">}</span>
<span class="p">},</span> <span class="p">},</span>
<span class="p">{</span> <span class="p">{</span>
<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span> <span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
<span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span> <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span> <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span> <span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span>
@@ -12483,7 +12483,7 @@ For this demo, we will use a python dictionary:</p>
<span class="p">}</span> <span class="p">}</span>
<span class="p">],</span> <span class="p">],</span>
<span class="s1">&#39;environment_agents&#39;</span><span class="p">:[</span> <span class="s1">&#39;environment_agents&#39;</span><span class="p">:[</span>
<span class="p">{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span> <span class="p">{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
<span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span> <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
<span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="mi">10</span> <span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="mi">10</span>
<span class="p">}</span> <span class="p">}</span>

View File

@@ -459,11 +459,11 @@
"sim = soil.Simulation(topology=G,\n", "sim = soil.Simulation(topology=G,\n",
" num_trials=1,\n", " num_trials=1,\n",
" max_time=MAX_TIME,\n", " max_time=MAX_TIME,\n",
" environment_agents=[{'agent_type': NewsEnvironmentAgent,\n", " environment_agents=[{'agent_class': NewsEnvironmentAgent,\n",
" 'state': {\n", " 'state': {\n",
" 'event_time': EVENT_TIME\n", " 'event_time': EVENT_TIME\n",
" }}],\n", " }}],\n",
" network_agents=[{'agent_type': NewsSpread,\n", " network_agents=[{'agent_class': NewsSpread,\n",
" 'weight': 1}],\n", " 'weight': 1}],\n",
" states={0: {'has_tv': True}},\n", " states={0: {'has_tv': True}},\n",
" default_state={'has_tv': False},\n", " default_state={'has_tv': False},\n",
@@ -588,14 +588,14 @@
" },\n", " },\n",
" 'network_agents': [\n", " 'network_agents': [\n",
" {\n", " {\n",
" 'agent_type': NewsSpread,\n", " 'agent_class': NewsSpread,\n",
" 'weight': 1,\n", " 'weight': 1,\n",
" 'state': {\n", " 'state': {\n",
" 'has_tv': False\n", " 'has_tv': False\n",
" }\n", " }\n",
" },\n", " },\n",
" {\n", " {\n",
" 'agent_type': NewsSpread,\n", " 'agent_class': NewsSpread,\n",
" 'weight': 2,\n", " 'weight': 2,\n",
" 'state': {\n", " 'state': {\n",
" 'has_tv': True\n", " 'has_tv': True\n",
@@ -603,7 +603,7 @@
" }\n", " }\n",
" ],\n", " ],\n",
" 'environment_agents':[\n", " 'environment_agents':[\n",
" {'agent_type': NewsEnvironmentAgent,\n", " {'agent_class': NewsEnvironmentAgent,\n",
" 'state': {\n", " 'state': {\n",
" 'event_time': 10\n", " 'event_time': 10\n",
" }\n", " }\n",

View File

@@ -2,8 +2,9 @@ networkx>=2.5
numpy numpy
matplotlib matplotlib
pyyaml>=5.1 pyyaml>=5.1
pandas>=0.23 pandas>=1
SALib>=1.3 SALib>=1.3
Jinja2 Jinja2
Mesa>=0.8 Mesa>=1.1
tsih>=0.1.5 pydantic>=1.9
sqlalchemy>=1.4


@@ -49,9 +49,10 @@ setup(
extras_require=extras_require, extras_require=extras_require,
tests_require=test_reqs, tests_require=test_reqs,
setup_requires=['pytest-runner', ], setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True, include_package_data=True,
entry_points={ entry_points={
'console_scripts': 'console_scripts':
['soil = soil.__init__:main', ['soil = soil.__main__:main',
'soil-web = soil.web.__init__:main'] 'soil-web = soil.web.__init__:main']
}) })


@@ -1 +1 @@
0.20.1 0.30.0rc3


@@ -1,8 +1,11 @@
from __future__ import annotations
import importlib import importlib
import sys import sys
import os import os
import pdb
import logging import logging
import traceback
from contextlib import contextmanager
from .version import __version__ from .version import __version__
@@ -14,82 +17,235 @@ except NameError:
from .agents import * from .agents import *
from . import agents from . import agents
from .simulation import * from .simulation import *
from .environment import Environment from .environment import Environment, EventedEnvironment
from . import serialization from . import serialization
from . import analysis
from .utils import logger from .utils import logger
from .time import * from .time import *
def main():
def main(
cfg="simulation.yml",
exporters=None,
parallel=None,
output="soil_output",
*,
do_run=False,
debug=False,
pdb=False,
**kwargs,
):
if isinstance(cfg, Simulation):
sim = cfg
import argparse import argparse
from . import simulation from . import simulation
logger.info('Running SOIL version: {}'.format(__version__)) logger.info("Running SOIL version: {}".format(__version__))
parser = argparse.ArgumentParser(description='Run a SOIL simulation') parser = argparse.ArgumentParser(description="Run a SOIL simulation")
parser.add_argument('file', type=str, parser.add_argument(
nargs="?", "file",
default='simulation.yml', type=str,
help='Configuration file for the simulation (e.g., YAML or JSON)') nargs="?",
parser.add_argument('--version', action='store_true', default=cfg if sim is None else "",
help='Show version info and exit') help="Configuration file for the simulation (e.g., YAML or JSON)",
parser.add_argument('--module', '-m', type=str, )
help='file containing the code of any custom agents.') parser.add_argument(
parser.add_argument('--dry-run', '--dry', action='store_true', "--version", action="store_true", help="Show version info and exit"
help='Do not store the results of the simulation.') )
parser.add_argument('--pdb', action='store_true', parser.add_argument(
help='Use a pdb console in case of exception.') "--module",
parser.add_argument('--graph', '-g', action='store_true', "-m",
help='Dump GEXF graph. Defaults to false.') type=str,
parser.add_argument('--csv', action='store_true', help="file containing the code of any custom agents.",
help='Dump history in CSV format. Defaults to false.') )
parser.add_argument('--level', type=str, parser.add_argument(
help='Logging level') "--dry-run",
parser.add_argument('--output', '-o', type=str, default="soil_output", "--dry",
help='folder to write results to. It defaults to the current directory.') action="store_true",
parser.add_argument('--synchronous', action='store_true', help="Do not store the results of the simulation to disk, show in terminal instead.",
help='Run trials serially and synchronously instead of in parallel. Defaults to false.') )
parser.add_argument('-e', '--exporter', action='append', parser.add_argument(
help='Export environment and/or simulations using this exporter') "--pdb", action="store_true", help="Use a pdb console in case of exception."
)
parser.add_argument(
"--debug",
action="store_true",
help="Run a customized version of a pdb console to debug a simulation.",
)
parser.add_argument(
"--graph",
"-g",
action="store_true",
help="Dump each trial's network topology as a GEXF graph. Defaults to false.",
)
parser.add_argument(
"--csv",
action="store_true",
help="Dump all data collected in CSV format. Defaults to false.",
)
parser.add_argument("--level", type=str, help="Logging level")
parser.add_argument(
"--output",
"-o",
type=str,
default=output or "soil_output",
help="folder to write results to. It defaults to the current directory.",
)
if parallel is None:
parser.add_argument(
"--synchronous",
action="store_true",
help="Run trials serially and synchronously instead of in parallel. Defaults to false.",
)
parser.add_argument(
"-e",
"--exporter",
action="append",
default=[],
help="Export environment and/or simulations using this exporter",
)
parser.add_argument(
"--only-convert",
"--convert",
action="store_true",
help="Do not run the simulation, only convert the configuration file(s) and output them.",
)
parser.add_argument(
"--set",
metavar="KEY=VALUE",
action="append",
help="Set a number of parameters that will be passed to the simulation."
"(do not put spaces before or after the = sign). "
"If a value contains spaces, you should define "
"it with double quotes: "
'foo="this is a sentence". Note that '
"values are always treated as strings.",
)
args = parser.parse_args() args = parser.parse_args()
logging.basicConfig(level=getattr(logging, (args.level or 'INFO').upper())) logger.setLevel(getattr(logging, (args.level or "INFO").upper()))
if args.version: if args.version:
return return
if parallel is None:
parallel = not args.synchronous
exporters = exporters or [
"default",
]
for exp in args.exporter:
if exp not in exporters:
exporters.append(exp)
if args.csv:
exporters.append("csv")
if args.graph:
exporters.append("gexf")
if os.getcwd() not in sys.path: if os.getcwd() not in sys.path:
sys.path.append(os.getcwd()) sys.path.append(os.getcwd())
if args.module: if args.module:
importlib.import_module(args.module) importlib.import_module(args.module)
if output is None:
output = args.output
logger.info('Loading config file: {}'.format(args.file)) debug = debug or args.debug
if args.pdb or debug:
args.synchronous = True
os.environ["SOIL_POSTMORTEM"] = "true"
res = []
try: try:
exporters = list(args.exporter or ['default', ])
if args.csv:
exporters.append('csv')
if args.graph:
exporters.append('gexf')
exp_params = {} exp_params = {}
if args.dry_run:
exp_params['copy_to'] = sys.stdout
if not os.path.exists(args.file): if sim:
logger.error('Please, input a valid file') logger.info("Loading simulation instance")
return sim.dry_run = args.dry_run
simulation.run_from_config(args.file, sim.exporters = exporters
dry_run=args.dry_run, sim.parallel = parallel
exporters=exporters, sim.outdir = output
parallel=(not args.synchronous), sims = [
outdir=args.output, sim,
exporter_params=exp_params) ]
except Exception: else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
sims = list(
simulation.iter_from_config(
args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs,
)
)
for sim in sims:
if args.set:
for s in args.set:
k, v = s.split("=", 1)[:2]
v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1]
target = sim
if head:
for part in head[0].split("."):
try:
target = getattr(target, part)
except AttributeError:
target = target[part]
try:
setattr(target, tail, v)
except AttributeError:
target[tail] = v
if args.only_convert:
print(sim.to_yaml())
continue
if do_run:
res.append(sim.run())
else:
print("not running")
res.append(sim)
except Exception as ex:
if args.pdb: if args.pdb:
pdb.post_mortem() from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
else: else:
raise raise
if debug:
from .debugging import set_trace
os.environ["SOIL_DEBUG"] = "true"
set_trace()
return res
if __name__ == '__main__': @contextmanager
main() def easy(cfg, pdb=False, debug=False, **kwargs):
try:
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
except Exception as e:
if os.environ.get("SOIL_POSTMORTEM"):
from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
raise
if __name__ == "__main__":
main(do_run=True)


@@ -1,4 +1,9 @@
from . import main from . import main as init_main
if __name__ == '__main__':
main() def main():
init_main(do_run=True)
if __name__ == "__main__":
init_main(do_run=True)


@@ -1,4 +1,3 @@
import random
from . import FSM, state, default_state from . import FSM, state, default_state
@@ -8,6 +7,7 @@ class BassModel(FSM):
innovation_prob innovation_prob
imitation_prob imitation_prob
""" """
sentimentCorrelation = 0 sentimentCorrelation = 0
def step(self): def step(self):
@@ -16,13 +16,13 @@ class BassModel(FSM):
@default_state @default_state
@state @state
def innovation(self): def innovation(self):
if random.random() < self.innovation_prob: if self.prob(self.innovation_prob):
self.sentimentCorrelation = 1 self.sentimentCorrelation = 1
return self.aware return self.aware
else: else:
aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id) aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors) num_neighbors_aware = len(aware_neighbors)
if random.random() < (self['imitation_prob']*num_neighbors_aware): if self.prob((self["imitation_prob"] * num_neighbors_aware)):
self.sentimentCorrelation = 1 self.sentimentCorrelation = 1
return self.aware return self.aware

View File

@@ -1,4 +1,3 @@
import random
from . import FSM, state, default_state from . import FSM, state, default_state
@@ -7,42 +6,54 @@ class BigMarketModel(FSM):
Settings: Settings:
Names: Names:
enterprises [Array] enterprises [Array]
tweet_probability_enterprises [Array] tweet_probability_enterprises [Array]
Users: Users:
tweet_probability_users tweet_probability_users
tweet_relevant_probability tweet_relevant_probability
tweet_probability_about [Array] tweet_probability_about [Array]
sentiment_about [Array] sentiment_about [Array]
""" """
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params['enterprises'] self.enterprises = self.env.environment_params["enterprises"]
self.type = "" self.type = ""
if self.id < len(self.enterprises): # Enterprises if self.id < len(self.enterprises): # Enterprises
self.set_state(self.enterprise.id) self._set_state(self.enterprise.id)
self.type = "Enterprise" self.type = "Enterprise"
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id] self.tweet_probability = environment.environment_params[
"tweet_probability_enterprises"
][self.id]
else: # normal users else: # normal users
self.type = "User" self.type = "User"
self.set_state(self.user.id) self._set_state(self.user.id)
self.tweet_probability = environment.environment_params['tweet_probability_users'] self.tweet_probability = environment.environment_params[
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability'] "tweet_probability_users"
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List ]
self.sentiment_about = environment.environment_params['sentiment_about'] # List self.tweet_relevant_probability = environment.environment_params[
"tweet_relevant_probability"
]
self.tweet_probability_about = environment.environment_params[
"tweet_probability_about"
] # List
self.sentiment_about = environment.environment_params[
"sentiment_about"
] # List
@state @state
def enterprise(self): def enterprise(self):
if random.random() < self.tweet_probability: # Tweets if self.random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbour users
for x in aware_neighbors: for x in aware_neighbors:
if random.uniform(0,10) < 5: if self.random.uniform(0, 10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else: else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
@@ -50,39 +61,49 @@ class BigMarketModel(FSM):
# Establecemos limites # Establecemos limites
if x.sentiment_about[self.id] > 1: if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1 x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id]< -1: if x.sentiment_about[self.id] < -1:
x.sentiment_about[self.id] = -1 x.sentiment_about[self.id] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id] x.attrs[
"sentiment_enterprise_%s" % self.enterprises[self.id]
] = x.sentiment_about[self.id]
@state @state
def user(self): def user(self):
if random.random() < self.tweet_probability: # Tweets if self.random.random() < self.tweet_probability: # Tweets
if random.random() < self.tweet_relevant_probability: # Tweets something relevant if (
self.random.random() < self.tweet_relevant_probability
): # Tweets something relevant
# Tweet probability per enterprise # Tweet probability per enterprise
for i in range(len(self.enterprises)): for i in range(len(self.enterprises)):
random_num = random.random() random_num = self.random.random()
if random_num < self.tweet_probability_about[i]: if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise # The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0: if self.sentiment_about[i] < 0:
# NEGATIVO # NEGATIVO
self.userTweets("negative",i) self.userTweets("negative", i)
elif self.sentiment_about[i] == 0: elif self.sentiment_about[i] == 0:
# NEUTRO # NEUTRO
pass pass
else: else:
# POSITIVO # POSITIVO
self.userTweets("positive",i) self.userTweets("positive", i)
for i in range(len(self.enterprises)): # So that it never is set to 0 if there are not changes (logs) for i in range(
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i] len(self.enterprises)
): # So that it never is set to 0 if there are not changes (logs)
self.attrs[
"sentiment_enterprise_%s" % self.enterprises[i]
] = self.sentiment_about[i]
def userTweets(self, sentiment,enterprise): def userTweets(self, sentiment, enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbours users
for x in aware_neighbors: for x in aware_neighbors:
if sentiment == "positive": if sentiment == "positive":
x.sentiment_about[enterprise] +=0.003 x.sentiment_about[enterprise] += 0.003
elif sentiment == "negative": elif sentiment == "negative":
x.sentiment_about[enterprise] -=0.003 x.sentiment_about[enterprise] -= 0.003
else: else:
pass pass
@@ -92,4 +113,6 @@ class BigMarketModel(FSM):
if x.sentiment_about[enterprise] < -1: if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1 x.sentiment_about[enterprise] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise] x.attrs[
"sentiment_enterprise_%s" % self.enterprises[enterprise]
] = x.sentiment_about[enterprise]


@@ -7,13 +7,17 @@ class CounterModel(NetworkAgent):
in each step and adds it to its state. in each step and adds it to its state.
""" """
times = 0
neighbors = 0
total = 0
def step(self): def step(self):
# Outside effects # Outside effects
total = len(list(self.get_agents())) total = len(list(self.model.schedule._agents))
neighbors = len(list(self.get_neighboring_agents())) neighbors = len(list(self.get_neighbors()))
self['times'] = self.get('times', 0) + 1 self["times"] = self.get("times", 0) + 1
self['neighbors'] = neighbors self["neighbors"] = neighbors
self['total'] = total self["total"] = total
class AggregatedCounter(NetworkAgent): class AggregatedCounter(NetworkAgent):
@@ -22,17 +26,15 @@ class AggregatedCounter(NetworkAgent):
in each step and adds it to its state. in each step and adds it to its state.
""" """
defaults = { times = 0
'times': 0, neighbors = 0
'neighbors': 0, total = 0
'total': 0
}
def step(self): def step(self):
# Outside effects # Outside effects
self['times'] += 1 self["times"] += 1
neighbors = len(list(self.get_neighboring_agents())) neighbors = len(list(self.get_neighbors()))
self['neighbors'] += neighbors self["neighbors"] += neighbors
total = len(list(self.get_agents())) total = len(list(self.model.schedule.agents))
self['total'] += total self["total"] += total
self.debug('Running for step: {}. Total: {}'.format(self.now, total)) self.debug("Running for step: {}. Total: {}".format(self.now, total))


@@ -2,20 +2,20 @@ from scipy.spatial import cKDTree as KDTree
import networkx as nx

from . import NetworkAgent, as_node


class Geo(NetworkAgent):
    """In this type of network, nodes have a "pos" attribute."""

    def geo_search(self, radius, node=None, center=False, **kwargs):
        """Get a list of nodes whose coordinates are closer than *radius* to *node*."""
        node = as_node(node if node is not None else self)
        G = self.subgraph(**kwargs)
        pos = nx.get_node_attributes(G, "pos")
        if not pos:
            return []
        nodes, coords = list(zip(*pos.items()))
        kdtree = KDTree(coords)  # Cannot provide generator.
        indices = kdtree.query_ball_point(pos[node], radius)
        return [nodes[i] for i in indices if center or (nodes[i] != node)]
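As a usage illustration only (not part of this changeset; the class name, radius and filter below are made up), geo_search can be combined with the usual agent filters from inside a step method:

# Hypothetical sketch of Geo.geo_search in an agent
class Walker(Geo):
    def step(self):
        # Node ids whose "pos" lies within 0.5 distance units of this node,
        # restricted (via the subgraph filters) to agents in state "infected"
        nearby = self.geo_search(radius=0.5, state_id="infected")
        self["n_nearby_infected"] = len(nearby)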


@@ -1,4 +1,3 @@
from . import BaseAgent

@@ -12,10 +11,10 @@ class IndependentCascadeModel(BaseAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.innovation_prob = self.env.environment_params["innovation_prob"]
        self.imitation_prob = self.env.environment_params["imitation_prob"]
        self.state["time_awareness"] = 0
        self.state["sentimentCorrelation"] = 0

    def step(self):
        self.behaviour()
@@ -23,26 +22,28 @@ class IndependentCascadeModel(BaseAgent):
    def behaviour(self):
        aware_neighbors_1_time_step = []
        # Outside effects
        if self.prob(self.innovation_prob):
            if self.state["id"] == 0:
                self.state["id"] = 1
                self.state["sentimentCorrelation"] = 1
                self.state[
                    "time_awareness"
                ] = self.env.now  # To know when they have been infected
            else:
                pass
            return

        # Imitation effects
        if self.state["id"] == 0:
            aware_neighbors = self.get_neighbors(state_id=1)
            for x in aware_neighbors:
                if x.state["time_awareness"] == (self.env.now - 1):
                    aware_neighbors_1_time_step.append(x)
            num_neighbors_aware = len(aware_neighbors_1_time_step)
            if self.prob(self.imitation_prob * num_neighbors_aware):
                self.state["id"] = 1
                self.state["sentimentCorrelation"] = 1
            else:
                pass


@@ -1,4 +1,3 @@
import numpy as np

from . import BaseAgent

@@ -24,84 +23,100 @@ class SpreadModelM2(BaseAgent):
    def __init__(self, environment, unique_id=0, state=()):
        super().__init__(model=environment, unique_id=unique_id, state=state)

        # Use a single generator with the same seed as `self.random`
        random = np.random.default_rng(seed=self._seed)
        self.prob_neutral_making_denier = random.normal(
            environment.environment_params["prob_neutral_making_denier"],
            environment.environment_params["standard_variance"],
        )

        self.prob_infect = random.normal(
            environment.environment_params["prob_infect"],
            environment.environment_params["standard_variance"],
        )

        self.prob_cured_healing_infected = random.normal(
            environment.environment_params["prob_cured_healing_infected"],
            environment.environment_params["standard_variance"],
        )
        self.prob_cured_vaccinate_neutral = random.normal(
            environment.environment_params["prob_cured_vaccinate_neutral"],
            environment.environment_params["standard_variance"],
        )

        self.prob_vaccinated_healing_infected = random.normal(
            environment.environment_params["prob_vaccinated_healing_infected"],
            environment.environment_params["standard_variance"],
        )
        self.prob_vaccinated_vaccinate_neutral = random.normal(
            environment.environment_params["prob_vaccinated_vaccinate_neutral"],
            environment.environment_params["standard_variance"],
        )
        self.prob_generate_anti_rumor = random.normal(
            environment.environment_params["prob_generate_anti_rumor"],
            environment.environment_params["standard_variance"],
        )

    def step(self):
        if self.state["id"] == 0:  # Neutral
            self.neutral_behaviour()
        elif self.state["id"] == 1:  # Infected
            self.infected_behaviour()
        elif self.state["id"] == 2:  # Cured
            self.cured_behaviour()
        elif self.state["id"] == 3:  # Vaccinated
            self.vaccinated_behaviour()

    def neutral_behaviour(self):
        # Infected
        infected_neighbors = self.get_neighbors(state_id=1)
        if len(infected_neighbors) > 0:
            if self.prob(self.prob_neutral_making_denier):
                self.state["id"] = 3  # Vaccinated making denier

    def infected_behaviour(self):
        # Neutral
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_infect):
                neighbor.state["id"] = 1  # Infected

    def cured_behaviour(self):
        # Vaccinate
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_cured_vaccinate_neutral):
                neighbor.state["id"] = 3  # Vaccinated

        # Cure
        infected_neighbors = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors:
            if self.prob(self.prob_cured_healing_infected):
                neighbor.state["id"] = 2  # Cured

    def vaccinated_behaviour(self):
        # Cure
        infected_neighbors = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors:
            if self.prob(self.prob_cured_healing_infected):
                neighbor.state["id"] = 2  # Cured

        # Vaccinate
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_cured_vaccinate_neutral):
                neighbor.state["id"] = 3  # Vaccinated

        # Generate anti-rumor
        infected_neighbors_2 = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors_2:
            if self.prob(self.prob_generate_anti_rumor):
                neighbor.state["id"] = 2  # Cured
class ControlModelM2(BaseAgent):
@@ -110,133 +125,146 @@ class ControlModelM2(BaseAgent):
    prob_neutral_making_denier
    prob_infect
    prob_cured_healing_infected
    prob_cured_vaccinate_neutral
    prob_vaccinated_healing_infected
    prob_vaccinated_vaccinate_neutral
    prob_generate_anti_rumor
    """

    def __init__(self, environment, unique_id=0, state=()):
        super().__init__(model=environment, unique_id=unique_id, state=state)

        self.prob_neutral_making_denier = np.random.normal(
            environment.environment_params["prob_neutral_making_denier"],
            environment.environment_params["standard_variance"],
        )

        self.prob_infect = np.random.normal(
            environment.environment_params["prob_infect"],
            environment.environment_params["standard_variance"],
        )

        self.prob_cured_healing_infected = np.random.normal(
            environment.environment_params["prob_cured_healing_infected"],
            environment.environment_params["standard_variance"],
        )
        self.prob_cured_vaccinate_neutral = np.random.normal(
            environment.environment_params["prob_cured_vaccinate_neutral"],
            environment.environment_params["standard_variance"],
        )

        self.prob_vaccinated_healing_infected = np.random.normal(
            environment.environment_params["prob_vaccinated_healing_infected"],
            environment.environment_params["standard_variance"],
        )
        self.prob_vaccinated_vaccinate_neutral = np.random.normal(
            environment.environment_params["prob_vaccinated_vaccinate_neutral"],
            environment.environment_params["standard_variance"],
        )
        self.prob_generate_anti_rumor = np.random.normal(
            environment.environment_params["prob_generate_anti_rumor"],
            environment.environment_params["standard_variance"],
        )

    def step(self):
        if self.state["id"] == 0:  # Neutral
            self.neutral_behaviour()
        elif self.state["id"] == 1:  # Infected
            self.infected_behaviour()
        elif self.state["id"] == 2:  # Cured
            self.cured_behaviour()
        elif self.state["id"] == 3:  # Vaccinated
            self.vaccinated_behaviour()
        elif self.state["id"] == 4:  # Beacon-off
            self.beacon_off_behaviour()
        elif self.state["id"] == 5:  # Beacon-on
            self.beacon_on_behaviour()

    def neutral_behaviour(self):
        self.state["visible"] = False

        # Infected
        infected_neighbors = self.get_neighbors(state_id=1)
        if len(infected_neighbors) > 0:
            if self.prob(self.prob_neutral_making_denier):
                self.state["id"] = 3  # Vaccinated making denier

    def infected_behaviour(self):
        # Neutral
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_infect):
                neighbor.state["id"] = 1  # Infected
        self.state["visible"] = False

    def cured_behaviour(self):
        self.state["visible"] = True

        # Vaccinate
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_cured_vaccinate_neutral):
                neighbor.state["id"] = 3  # Vaccinated

        # Cure
        infected_neighbors = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors:
            if self.prob(self.prob_cured_healing_infected):
                neighbor.state["id"] = 2  # Cured

    def vaccinated_behaviour(self):
        self.state["visible"] = True

        # Cure
        infected_neighbors = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors:
            if self.prob(self.prob_cured_healing_infected):
                neighbor.state["id"] = 2  # Cured

        # Vaccinate
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_cured_vaccinate_neutral):
                neighbor.state["id"] = 3  # Vaccinated

        # Generate anti-rumor
        infected_neighbors_2 = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors_2:
            if self.prob(self.prob_generate_anti_rumor):
                neighbor.state["id"] = 2  # Cured

    def beacon_off_behaviour(self):
        self.state["visible"] = False
        infected_neighbors = self.get_neighbors(state_id=1)
        if len(infected_neighbors) > 0:
            self.state["id"] = 5  # Beacon on

    def beacon_on_behaviour(self):
        self.state["visible"] = False

        # Cure (M2 feature added)
        infected_neighbors = self.get_neighbors(state_id=1)
        for neighbor in infected_neighbors:
            if self.prob(self.prob_generate_anti_rumor):
                neighbor.state["id"] = 2  # Cured
            neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
            for neighbor in neutral_neighbors_infected:
                if self.prob(self.prob_generate_anti_rumor):
                    neighbor.state["id"] = 3  # Vaccinated
            infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
            for neighbor in infected_neighbors_infected:
                if self.prob(self.prob_generate_anti_rumor):
                    neighbor.state["id"] = 2  # Cured

        # Vaccinate
        neutral_neighbors = self.get_neighbors(state_id=0)
        for neighbor in neutral_neighbors:
            if self.prob(self.prob_cured_vaccinate_neutral):
                neighbor.state["id"] = 3  # Vaccinated


@@ -1,4 +1,3 @@
import numpy as np

from . import FSM, state

@@ -7,87 +6,97 @@ class SISaModel(FSM):
    """
    Settings:
        neutral_discontent_spon_prob
        neutral_discontent_infected_prob
        neutral_content_spon_prob
        neutral_content_infected_prob
        discontent_neutral
        discontent_content
        variance_d_c
        content_discontent
        variance_c_d
        content_neutral
        standard_variance
    """

    def __init__(self, environment, unique_id=0, state=()):
        super().__init__(model=environment, unique_id=unique_id, state=state)

        random = np.random.default_rng(seed=self._seed)

        self.neutral_discontent_spon_prob = random.normal(
            self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
        )
        self.neutral_discontent_infected_prob = random.normal(
            self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
        )
        self.neutral_content_spon_prob = random.normal(
            self.env["neutral_content_spon_prob"], self.env["standard_variance"]
        )
        self.neutral_content_infected_prob = random.normal(
            self.env["neutral_content_infected_prob"], self.env["standard_variance"]
        )

        self.discontent_neutral = random.normal(
            self.env["discontent_neutral"], self.env["standard_variance"]
        )
        self.discontent_content = random.normal(
            self.env["discontent_content"], self.env["variance_d_c"]
        )

        self.content_discontent = random.normal(
            self.env["content_discontent"], self.env["variance_c_d"]
        )
        self.content_neutral = random.normal(
            self.env["content_neutral"], self.env["standard_variance"]
        )

    @state
    def neutral(self):
        # Spontaneous effects
        if self.prob(self.neutral_discontent_spon_prob):
            return self.discontent
        if self.prob(self.neutral_content_spon_prob):
            return self.content

        # Infected
        discontent_neighbors = self.count_neighbors(state_id=self.discontent)
        if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
            return self.discontent
        content_neighbors = self.count_neighbors(state_id=self.content.id)
        if self.prob(content_neighbors * self.neutral_content_infected_prob):
            return self.content
        return self.neutral

    @state
    def discontent(self):
        # Healing
        if self.prob(self.discontent_neutral):
            return self.neutral

        # Superinfected
        content_neighbors = self.count_neighbors(state_id=self.content.id)
        if self.prob(content_neighbors * self.discontent_content):
            return self.content
        return self.discontent

    @state
    def content(self):
        # Healing
        if self.prob(self.content_neutral):
            return self.neutral

        # Superinfected
        discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
        if self.prob(discontent_neighbors * self.content_discontent):
            return self.discontent
        return self.content


@@ -1,4 +1,3 @@
from . import BaseAgent

@@ -6,27 +5,31 @@ class SentimentCorrelationModel(BaseAgent):
    """
    Settings:
        outside_effects_prob
        anger_prob
        joy_prob
        sadness_prob
        disgust_prob
    """

    def __init__(self, environment, unique_id=0, state=()):
        super().__init__(model=environment, unique_id=unique_id, state=state)
        self.outside_effects_prob = environment.environment_params[
            "outside_effects_prob"
        ]
        self.anger_prob = environment.environment_params["anger_prob"]
        self.joy_prob = environment.environment_params["joy_prob"]
        self.sadness_prob = environment.environment_params["sadness_prob"]
        self.disgust_prob = environment.environment_params["disgust_prob"]
        self.state["time_awareness"] = []
        for i in range(4):  # In this model we have 4 sentiments
            self.state["time_awareness"].append(
                0
            )  # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
        self.state["sentimentCorrelation"] = 0

    def step(self):
        self.behaviour()
@@ -38,65 +41,75 @@ class SentimentCorrelationModel(BaseAgent):
        sad_neighbors_1_time_step = []
        disgusted_neighbors_1_time_step = []

        angry_neighbors = self.get_neighbors(state_id=1)
        for x in angry_neighbors:
            if x.state["time_awareness"][0] > (self.env.now - 500):
                angry_neighbors_1_time_step.append(x)
        num_neighbors_angry = len(angry_neighbors_1_time_step)

        joyful_neighbors = self.get_neighbors(state_id=2)
        for x in joyful_neighbors:
            if x.state["time_awareness"][1] > (self.env.now - 500):
                joyful_neighbors_1_time_step.append(x)
        num_neighbors_joyful = len(joyful_neighbors_1_time_step)

        sad_neighbors = self.get_neighbors(state_id=3)
        for x in sad_neighbors:
            if x.state["time_awareness"][2] > (self.env.now - 500):
                sad_neighbors_1_time_step.append(x)
        num_neighbors_sad = len(sad_neighbors_1_time_step)

        disgusted_neighbors = self.get_neighbors(state_id=4)
        for x in disgusted_neighbors:
            if x.state["time_awareness"][3] > (self.env.now - 500):
                disgusted_neighbors_1_time_step.append(x)
        num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)

        anger_prob = self.anger_prob + (
            len(angry_neighbors_1_time_step) * self.anger_prob
        )
        joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
        sadness_prob = self.sadness_prob + (
            len(sad_neighbors_1_time_step) * self.sadness_prob
        )
        disgust_prob = self.disgust_prob + (
            len(disgusted_neighbors_1_time_step) * self.disgust_prob
        )
        outside_effects_prob = self.outside_effects_prob

        num = self.random.random()

        if num < outside_effects_prob:
            self.state["id"] = self.random.randint(1, 4)

            self.state["sentimentCorrelation"] = self.state[
                "id"
            ]  # It is stored when it has been infected for the dynamic network
            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
            self.state["sentiment"] = self.state["id"]

        if num < anger_prob:
            self.state["id"] = 1
            self.state["sentimentCorrelation"] = 1
            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
        elif num < joy_prob + anger_prob and num > anger_prob:
            self.state["id"] = 2
            self.state["sentimentCorrelation"] = 2
            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
        elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
            self.state["id"] = 3
            self.state["sentimentCorrelation"] = 3
            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
        elif (
            num < disgust_prob + sadness_prob + anger_prob + joy_prob
            and num > sadness_prob + anger_prob + joy_prob
        ):
            self.state["id"] = 4
            self.state["sentimentCorrelation"] = 4
            self.state["time_awareness"][self.state["id"] - 1] = self.env.now

        self.state["sentiment"] = self.state["id"]

File diff suppressed because it is too large.

soil/agents/evented.py (new file, 77 lines)

@@ -0,0 +1,77 @@
from . import BaseAgent
from ..events import Message, Tell, Ask, TimedOut
from ..time import BaseCond
from functools import partial
from collections import deque
class ReceivedOrTimeout(BaseCond):
def __init__(
self, agent, expiration=None, timeout=None, check=True, ignore=False, **kwargs
):
if expiration is None:
if timeout is not None:
expiration = agent.now + timeout
self.expiration = expiration
self.ignore = ignore
self.check = check
super().__init__(**kwargs)
def expired(self, time):
return self.expiration and self.expiration < time
def ready(self, agent, time):
return len(agent._inbox) or self.expired(time)
def return_value(self, agent):
if not self.ignore and self.expired(agent.now):
raise TimedOut("No messages received")
if self.check:
agent.check_messages()
return None
def schedule_next(self, time, delta, first=False):
if self._delta is not None:
delta = self._delta
return (time + delta, self)
def __repr__(self):
return f"ReceivedOrTimeout(expires={self.expiration})"
class EventedAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._inbox = deque()
self._processed = 0
def on_receive(self, *args, **kwargs):
pass
def received(self, *args, **kwargs):
return ReceivedOrTimeout(self, *args, **kwargs)
def tell(self, msg, sender=None):
self._inbox.append(Tell(timestamp=self.now, payload=msg, sender=sender))
def ask(self, msg, timeout=None, **kwargs):
ask = Ask(timestamp=self.now, payload=msg, sender=self)
self._inbox.append(ask)
expiration = float("inf") if timeout is None else self.now + timeout
return ask.replied(expiration=expiration, **kwargs)
def check_messages(self):
changed = False
while self._inbox:
msg = self._inbox.popleft()
self._processed += 1
if msg.expired(self.now):
continue
changed = True
reply = self.on_receive(msg.payload, sender=msg.sender)
if isinstance(msg, Ask):
msg.reply = reply
return changed
Evented = EventedAgent
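A minimal sketch of how these primitives could be used, assuming an evented environment with two agent classes (the class names and payloads below are illustrative, not part of this diff):

# Hypothetical usage sketch of EventedAgent messaging
class Pinger(EventedAgent):
    def step(self):
        for other in self.model.agents:
            if other is not self:
                other.tell("ping", sender=self)  # fire-and-forget message


class Ponger(EventedAgent):
    def on_receive(self, msg, sender=None):
        self.debug(f"got {msg!r} from {sender}")

    def step(self):
        self.check_messages()  # deliver anything queued in the inbox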

soil/agents/fsm.py (new file, 140 lines)

@@ -0,0 +1,140 @@
from . import MetaAgent, BaseAgent
from functools import partial, wraps
import inspect
def state(name=None):
def decorator(func, name=None):
"""
A state function should return either a state id, or a tuple (state_id, when)
The default value for state_id is the current state id.
The default value for when is the interval defined in the environment.
"""
if inspect.isgeneratorfunction(func):
orig_func = func
@wraps(func)
def func(self):
while True:
if not self._coroutine:
self._coroutine = orig_func(self)
try:
if self._last_except:
n = self._coroutine.throw(self._last_except)
else:
n = self._coroutine.send(self._last_return)
if n:
return None, n
return n
except StopIteration as ex:
self._coroutine = None
next_state = ex.value
if next_state is not None:
self._set_state(next_state)
return next_state
finally:
self._last_return = None
self._last_except = None
func.id = name or func.__name__
func.is_default = False
return func
if callable(name):
return decorator(name)
else:
return partial(decorator, name=name)
def default_state(func):
func.is_default = True
return func
class MetaFSM(MetaAgent):
def __new__(mcls, name, bases, namespace):
states = {}
# Re-use states from inherited classes
default_state = None
for i in bases:
if isinstance(i, MetaFSM):
for state_id, state in i._states.items():
if state.is_default:
default_state = state
states[state_id] = state
# Add new states
for attr, func in namespace.items():
if hasattr(func, "id"):
if func.is_default:
default_state = func
states[func.id] = func
namespace.update(
{
"_default_state": default_state,
"_states": states,
}
)
return super(MetaFSM, mcls).__new__(
mcls=mcls, name=name, bases=bases, namespace=namespace
)
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, **kwargs):
super(FSM, self).__init__(**kwargs)
if not hasattr(self, "state_id"):
if not self._default_state:
raise ValueError(
"No default state specified for {}".format(self.unique_id)
)
self.state_id = self._default_state.id
self._coroutine = None
self._set_state(self.state_id)
def step(self):
self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
default_interval = super().step()
next_state = self._states[self.state_id](self)
when = None
try:
next_state, *when = next_state
if not when:
when = None
elif len(when) == 1:
when = when[0]
else:
raise ValueError(
"Too many values returned. Only state (and time) allowed"
)
except TypeError:
pass
if next_state is not None:
self._set_state(next_state)
return when or default_interval
def _set_state(self, state, when=None):
if hasattr(state, "id"):
state = state.id
if state not in self._states:
raise ValueError("{} is not a valid state".format(state))
self.state_id = state
if when is not None:
self.model.schedule.add(self, when=when)
return state
def die(self):
return self.dead, super().die()
@state
def dead(self):
return self.die()
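For illustration (a sketch, not taken from this changeset), a state method can return either the next state, or a (state, when) tuple to control when the agent is scheduled again, exactly as described in the decorator's docstring above:

# Hypothetical FSM agent using the decorators defined above
class TrafficLight(FSM):
    @default_state
    @state
    def green(self):
        # Switch to red and ask to be stepped again 2 time units from now
        return self.red, 2

    @state
    def red(self):
        # Returning only a state keeps the default interval
        return self.green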


@@ -0,0 +1,85 @@
from . import BaseAgent
class NetworkAgent(BaseAgent):
def __init__(self, *args, topology, node_id, **kwargs):
super().__init__(*args, **kwargs)
assert topology is not None
assert node_id is not None
self.G = topology
assert self.G
self.node_id = node_id
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
def get_neighbors(self, **kwargs):
return list(self.iter_agents(limit_neighbors=True, **kwargs))
@property
def node(self):
return self.G.nodes[self.node_id]
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None
if isinstance(unique_id, list):
unique_ids = set(unique_id)
elif unique_id is not None:
unique_ids = set(
[
unique_id,
]
)
if limit_neighbors:
neighbor_ids = set()
for node_id in self.G.neighbors(self.node_id):
if self.G.nodes[node_id].get("agent") is not None:
neighbor_ids.add(node_id)
if unique_ids:
unique_ids = unique_ids & neighbor_ids
else:
unique_ids = neighbor_ids
if not unique_ids:
return
unique_ids = list(unique_ids)
yield from super().iter_agents(unique_id=unique_ids, **kwargs)
def subgraph(self, center=True, **kwargs):
include = [self] if center else []
G = self.G.subgraph(
n.node_id for n in list(self.get_agents(**kwargs) + include)
)
return G
def remove_node(self):
self.debug(f"Removing node for {self.unique_id}: {self.node_id}")
self.G.remove_node(self.node_id)
self.node_id = None
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
if self.node_id not in self.G.nodes(data=False):
raise ValueError(
"{} not in list of existing agents in the network".format(
self.unique_id
)
)
if other.node_id not in self.G.nodes(data=False):
raise ValueError(
"{} not in list of existing agents in the network".format(other)
)
self.G.add_edge(
self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs
)
def die(self, remove=True):
if not self.alive:
return None
if remove:
self.remove_node()
return super().die()
NetAgent = NetworkAgent
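A brief, hypothetical example of the renamed helpers (get_neighbors / count_neighbors) inside an agent's step method:

# Illustrative sketch only; the class and state id are made up
class NeighborWatcher(NetworkAgent):
    def step(self):
        # Count neighbouring agents currently in the "infected" state
        self["infected_nearby"] = self.count_neighbors(state_id="infected")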

View File

@@ -1,206 +0,0 @@
import pandas as pd
import glob
import yaml
from os.path import join
from . import serialization
from tsih import History
def read_data(*args, group=False, **kwargs):
iterable = _read_data(*args, **kwargs)
if group:
return group_trials(iterable)
else:
return list(iterable)
def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
if not process_args:
process_args = {}
for folder in glob.glob(pattern):
config_file = glob.glob(join(folder, '*.yml'))[0]
config = yaml.load(open(config_file), Loader=yaml.SafeLoader)
df = None
if from_csv:
for trial_data in sorted(glob.glob(join(folder,
'*.environment.csv'))):
df = read_csv(trial_data, **kwargs)
yield config_file, df, config
else:
for trial_data in sorted(glob.glob(join(folder, '*.sqlite'))):
df = read_sql(trial_data, **kwargs)
yield config_file, df, config
def read_sql(db, *args, **kwargs):
h = History(db_path=db, backup=False, readonly=True)
df = h.read_sql(*args, **kwargs)
return df
def read_csv(filename, keys=None, convert_types=False, **kwargs):
'''
Read a CSV in canonical form: ::
<agent_id, t_step, key, value, value_type>
'''
df = pd.read_csv(filename)
if convert_types:
df = convert_types_slow(df)
if keys:
df = df[df['key'].isin(keys)]
df = process_one(df)
return df
def convert_row(row):
row['value'] = serialization.deserialize(row['value_type'], row['value'])
return row
def convert_types_slow(df):
'''
Go over every column in a dataframe and convert it to the type determined by the `get_types`
function.
This is a slow operation.
'''
dtypes = get_types(df)
for k, v in dtypes.items():
t = df[df['key']==k]
t['value'] = t['value'].astype(v)
df = df.apply(convert_row, axis=1)
return df
def split_processed(df):
env = df.loc[:, df.columns.get_level_values(1).isin(['env', 'stats'])]
agents = df.loc[:, ~df.columns.get_level_values(1).isin(['env', 'stats'])]
return env, agents
def split_df(df):
'''
Split a dataframe in two dataframes: one with the history of agents,
and one with the environment history
'''
envmask = (df['agent_id'] == 'env')
n_env = envmask.sum()
if n_env == len(df):
return df, None
elif n_env == 0:
return None, df
agents, env = [x for _, x in df.groupby(envmask)]
return env, agents
def process(df, **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
two dataframes with a column per key: one with the history of the agents, and one for the
history of the environment.
'''
env, agents = split_df(df)
return process_one(env, **kwargs), process_one(agents, **kwargs)
def get_types(df):
'''
Get the value type for every key stored in a raw history dataframe.
'''
dtypes = df.groupby(by=['key'])['value_type'].unique()
return {k:v[0] for k,v in dtypes.iteritems()}
def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
fill=True, index=['t_step',],
aggfunc='first', **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
a dataframe with a column per key
'''
if df is None:
return df
if keys:
df = df[df['key'].isin(keys)]
df = df.pivot_table(values=values, index=index, columns=columns,
aggfunc=aggfunc, **kwargs)
if fill:
df = fillna(df)
return df
def get_count(df, *keys):
'''
For every t_step and key, get the value count.
The result is a dataframe with `t_step` as index, an a multiindex column based on `key` and the values found for each `key`.
'''
if keys:
df = df[list(keys)]
df.columns = df.columns.remove_unused_levels()
counts = pd.DataFrame()
for key in df.columns.levels[0]:
g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
for value, series in g.iteritems():
counts[key, value] = series
counts.columns = pd.MultiIndex.from_tuples(counts.columns)
return counts
def get_majority(df, *keys):
'''
For every t_step and key, get the value of the majority of agents
The result is a dataframe with `t_step` as index, and columns based on `key`.
'''
df = get_count(df, *keys)
return df.stack(level=0).idxmax(axis=1).unstack()
def get_value(df, *keys, aggfunc='sum'):
'''
For every t_step and key, get the value of *numeric columns*, aggregated using a specific function.
'''
if keys:
df = df[list(keys)]
df.columns = df.columns.remove_unused_levels()
df = df.select_dtypes('number')
return df.groupby(level='key', axis=1).agg(aggfunc)
def plot_all(*args, plot_args={}, **kwargs):
'''
Read all the trial data and plot the result of applying a function on them.
'''
dfs = do_all(*args, **kwargs)
ps = []
for line in dfs:
f, df, config = line
if len(df) < 1:
continue
df.plot(title=config['name'], **plot_args)
ps.append(df)
return ps
def do_all(pattern, func, *keys, include_env=False, **kwargs):
for config_file, df, config in read_data(pattern, keys=keys):
if len(df) < 1:
continue
p = func(df, *keys, **kwargs)
yield config_file, p, config
def group_trials(trials, aggfunc=['mean', 'min', 'max', 'std']):
trials = list(trials)
trials = list(map(lambda x: x[1] if isinstance(x, tuple) else x, trials))
return pd.concat(trials).groupby(level=0).agg(aggfunc).reorder_levels([2, 0,1] ,axis=1)
def fillna(df):
new_df = df.ffill(axis=0)
return new_df

270
soil/config.py Normal file
View File

@@ -0,0 +1,270 @@
from __future__ import annotations
from enum import Enum
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml
import os
import sys
from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
from . import environment, utils
import networkx as nx
# Could use TypeAlias in python >= 3.10
nodeId = int
class Node(BaseModel):
id: nodeId
state: Optional[Dict[str, Any]] = {}
class Edge(BaseModel):
source: nodeId
target: nodeId
value: Optional[float] = 1
class Topology(BaseModel):
nodes: List[Node]
directed: bool
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
params: Optional[NetParams]
fixed: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
class Config:
arbitrary_types_allowed = True
@staticmethod
def default():
return NetConfig(topology=None, params=None)
@root_validator
def validate_all(cls, values):
if "params" not in values and "topology" not in values:
raise ValueError(
"You must specify either a topology or the parameters to generate a graph"
)
return values
class EnvConfig(BaseModel):
@staticmethod
def default():
return EnvConfig()
class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None
unique_id: Optional[int] = None
topology: Optional[bool] = False
node_id: Optional[Union[int, str]] = None
state: Optional[Dict[str, Any]] = {}
class FixedAgentConfig(SingleAgentConfig):
n: Optional[int] = 1
hidden: Optional[bool] = False # Do not count this agent towards total agent count
@root_validator
def validate_all(cls, values):
if values.get("unique_id", None) is not None and values.get("n", 1) > 1:
raise ValueError(
f"An unique_id can only be provided when there is only one agent ({values.get('n')} given)"
)
return values
class OverrideAgentConfig(FixedAgentConfig):
filter: Optional[Dict[str, Any]] = None
class Strategy(Enum):
topology = "topology"
total = "total"
class AgentDistro(SingleAgentConfig):
weight: Optional[float] = 1
strategy: Strategy = Strategy.topology
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
@root_validator
def validate_all(cls, values):
if "distribution" in values and (
"n" not in values and "topology" not in values
):
raise ValueError(
"You need to provide the number of agents or a topology to extract the value from."
)
return values
class Config(BaseModel, extra=Extra.allow):
version: Optional[str] = "1"
name: str = "Unnamed Simulation"
description: Optional[str] = None
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
max_steps: int = -1
interval: float = 1
seed: str = ""
dry_run: bool = False
model_class: Union[Type, str] = environment.Environment
model_params: Optional[Dict[str, Any]] = {}
visualization_params: Optional[Dict[str, Any]] = {}
@classmethod
def from_raw(cls, cfg):
if isinstance(cfg, Config):
return cfg
if cfg.get("version", "1") == "1" and any(
k in cfg for k in ["agents", "agent_class", "topology", "environment_class"]
):
return convert_old(cfg)
return Config(**cfg)
def convert_old(old, strict=True):
"""
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
"""
utils.logger.warning(
"The old configuration format is deprecated. The converted file MAY NOT yield the right results"
)
new = old.copy()
network = {}
if "topology" in old:
del new["topology"]
network["topology"] = old["topology"]
if "network_params" in old and old["network_params"]:
del new["network_params"]
for (k, v) in old["network_params"].items():
if k == "path":
network["path"] = v
else:
network.setdefault("params", {})[k] = v
topology = None
if network:
topology = network
agents = {"fixed": [], "distribution": []}
def updated_agent(agent):
"""Convert an agent definition"""
newagent = dict(agent)
return newagent
by_weight = []
fixed = []
override = []
if "environment_agents" in new:
for agent in new["environment_agents"]:
agent.setdefault("state", {})["group"] = "environment"
if "agent_id" in agent:
agent["state"]["name"] = agent["agent_id"]
del agent["agent_id"]
agent["hidden"] = True
agent["topology"] = False
fixed.append(updated_agent(agent))
del new["environment_agents"]
if "agent_class" in old:
del new["agent_class"]
agents["agent_class"] = old["agent_class"]
if "default_state" in old:
del new["default_state"]
agents["state"] = old["default_state"]
if "network_agents" in old:
agents["topology"] = True
agents.setdefault("state", {})["group"] = "network"
for agent in new["network_agents"]:
agent = updated_agent(agent)
if "agent_id" in agent:
agent["state"]["name"] = agent["agent_id"]
del agent["agent_id"]
fixed.append(agent)
else:
by_weight.append(agent)
del new["network_agents"]
if "agent_class" in old and (not fixed and not by_weight):
agents["topology"] = True
by_weight = [{"agent_class": old["agent_class"], "weight": 1}]
# TODO: translate states properly
if "states" in old:
del new["states"]
states = old["states"]
if isinstance(states, dict):
states = states.items()
else:
states = enumerate(states)
for (k, v) in states:
override.append({"filter": {"node_id": k}, "state": v})
agents["override"] = override
agents["fixed"] = fixed
agents["distribution"] = by_weight
model_params = {}
if "environment_params" in new:
del new["environment_params"]
model_params = dict(old["environment_params"])
if "environment_class" in old:
del new["environment_class"]
new["model_class"] = old["environment_class"]
if "dump" in old:
del new["dump"]
new["dry_run"] = not old["dump"]
model_params["topology"] = topology
model_params["agents"] = agents
return Config(version="2", model_params=model_params, **new)
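As a quick sketch of how this configuration model is meant to be consumed (the file name and YAML contents are assumptions, not part of this diff):

# Hypothetical loading of a config file using the classes defined above
import yaml

with open("simulation.yml") as f:
    raw = yaml.safe_load(f)

# from_raw falls back to convert_old() when it detects a version-1 config
cfg = Config.from_raw(raw)
print(cfg.name, cfg.num_trials, list(cfg.model_params))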


@@ -1,26 +1,6 @@
from mesa import DataCollector as MDC


class SoilDataCollector(MDC):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

190
soil/debugging.py Normal file
View File

@@ -0,0 +1,190 @@
from __future__ import annotations
import pdb
import sys
import os
from textwrap import indent
from functools import wraps
from .agents import FSM, MetaFSM
def wrapcmd(func):
@wraps(func)
def wrapper(self, arg: str, temporary=False):
sys.settrace(self.trace_dispatch)
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
known["agent"] = known.get("self", None)
known["model"] = known.get("self", {}).get("model")
known["attrs"] = arg.strip().split()
exec(func.__code__, known, known)
return wrapper
class Debug(pdb.Pdb):
def __init__(self, *args, skip_soil=False, **kwargs):
skip = kwargs.get("skip", [])
if skip_soil:
skip.append("soil")
skip.append("contextlib")
skip.append("soil.*")
skip.append("mesa.*")
super(Debug, self).__init__(*args, skip=skip, **kwargs)
self.prompt = "[soil-pdb] "
@staticmethod
def _soil_agents(model, attrs=None, pretty=True, **kwargs):
for agent in model.agents(**kwargs):
d = agent
print(" - " + indent(agent.to_str(keys=attrs, pretty=pretty), " "))
@wrapcmd
def do_soil_agents():
return Debug._soil_agents(model, attrs=attrs or None)
do_sa = do_soil_agents
@wrapcmd
def do_soil_list():
return Debug._soil_agents(model, attrs=["state_id"], pretty=False)
do_sl = do_soil_list
def do_continue_state(self, arg):
self.do_break_state(arg, temporary=True)
return self.do_continue("")
do_cs = do_continue_state
@wrapcmd
def do_soil_agent():
if not agent:
print("No agent available")
return
keys = None
if attrs:
keys = []
for k in attrs:
for key in agent.keys():
if key.startswith(k):
keys.append(key)
print(agent.to_str(pretty=True, keys=keys))
do_aa = do_soil_agent
def do_break_state(self, arg: str, instances=None, temporary=False):
"""
Break before a specified state is stepped into.
"""
klass = None
state = arg
if not state:
self.error("Specify at least a state name")
return
state, *tokens = state.lstrip().split()
if tokens:
instances = list(eval(token) for token in tokens)
colon = state.find(":")
if colon > 0:
klass = state[:colon].rstrip()
state = state[colon + 1 :].strip()
print(klass, state, tokens)
klass = eval(klass, self.curframe.f_globals, self.curframe_locals)
if klass:
klasses = [klass]
else:
klasses = [
k
for k in self.curframe.f_globals.values()
if isinstance(k, type) and issubclass(k, FSM)
]
if not klasses:
self.error("No agent classes found")
for klass in klasses:
try:
func = getattr(klass, state)
except AttributeError:
self.error(f"State {state} not found in class {klass}")
continue
if hasattr(func, "__func__"):
func = func.__func__
code = func.__code__
# use co_name to identify the bkpt (function names
# could be aliased, but co_name is invariant)
funcname = code.co_name
lineno = code.co_firstlineno
filename = code.co_filename
# Check for reasonable breakpoint
line = self.checkline(filename, lineno)
if not line:
raise ValueError("no line found")
# now set the break point
cond = None
if instances:
cond = f"self.unique_id in { repr(instances) }"
existing = self.get_breaks(filename, line)
if existing:
self.message("Breakpoint already exists at %s:%d" % (filename, line))
continue
err = self.set_break(filename, line, temporary, cond, funcname)
if err:
self.error(err)
else:
bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
do_bs = do_break_state
def do_break_state_self(self, arg: str, temporary=False):
"""
Break before a specified state is stepped into, for the current agent
"""
agent = self.curframe.f_locals.get("self")
if not agent:
self.error("No current agent.")
self.error("Try this again when the debugger is stopped inside an agent")
return
arg = f"{agent.__class__.__name__}:{ arg } {agent.unique_id}"
return self.do_break_state(arg)
do_bss = do_break_state_self
debugger = None
def set_trace(frame=None, **kwargs):
global debugger
if debugger is None:
debugger = Debug(**kwargs)
frame = frame or sys._getframe().f_back
debugger.set_trace(frame)
def post_mortem(traceback=None, **kwargs):
global debugger
if debugger is None:
debugger = Debug(**kwargs)
t = sys.exc_info()[2]
debugger.reset()
debugger.interaction(None, t)
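A possible way to drop into this debugger from a simulation script (illustrative only; the command aliases come from the methods defined above):

# Hypothetical usage sketch
from soil import debugging  # assumed import path for this new module

# e.g. inside an agent's step(), or right before running a simulation
debugging.set_trace()

# At the "[soil-pdb] " prompt:
#   sl                       -> list agents and their state_id
#   bs MyFSMAgent:infected   -> break when any MyFSMAgent steps into "infected"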


@@ -1,377 +1,339 @@
from __future__ import annotations

import os
import sqlite3
import math
import logging
import inspect

from typing import Any, Dict, Optional, Union
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
from networkx.readwrite import json_graph

import networkx as nx

from mesa import Model
from mesa.datacollection import DataCollector

from . import agents as agentmod, config, serialization, utils, time, network, events


class BaseEnvironment(Model):
    """
    The environment is key in a simulation. It controls how agents interact,
    and what information is available to them.

    This is an opinionated version of `mesa.Model` class, which adds many
    convenience methods and abstractions.

    The environment parameters and the state of every agent can be accessed
    both by using the environment as a dictionary and with the environment's
    :meth:`soil.environment.Environment.get` method.
    """

    def __init__(
        self,
        id="unnamed_env",
        seed="default",
        schedule_class=time.TimedActivation,
        dir_path=None,
        interval=1,
        agent_class=None,
        agents: [tuple[type, Dict[str, Any]]] = {},
        agent_reporters: Optional[Any] = None,
        model_reporters: Optional[Any] = None,
        tables: Optional[Any] = None,
        **env_params,
    ):

        super().__init__(seed=seed)
        self.env_params = env_params or {}

        self.current_id = -1

        self.id = id

        self.dir_path = dir_path or os.getcwd()

        if schedule_class is None:
            schedule_class = time.TimedActivation
        else:
            schedule_class = serialization.deserialize(schedule_class)
        self.schedule = schedule_class(self)

        self.agent_class = agent_class or agentmod.BaseAgent

        self.interval = interval
        self.init_agents(agents)

        self.logger = utils.logger.getChild(self.id)

        self.datacollector = DataCollector(
            model_reporters=model_reporters,
            agent_reporters=agent_reporters,
            tables=tables,
        )

    def _agent_from_dict(self, agent):
        """
        Translate an agent dictionary into an agent
        """
        agent = dict(**agent)
        cls = agent.pop("agent_class", None) or self.agent_class
        unique_id = agent.pop("unique_id", None)
        if unique_id is None:
            unique_id = self.next_id()

        return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)

    def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
        """
        Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
        dictionaries that each describes an agent.

        If given a list of dictionaries, an agent will be created for each dictionary. The agent
        class can be specified through the `agent_class` key. The rest of the items will be used
        as parameters to the agent.
        """
        if not agents:
            return

        lst = agents
        override = []
        if not isinstance(lst, list):
            if not isinstance(agents, config.AgentConfig):
                lst = config.AgentConfig(**agents)
            if lst.override:
                override = lst.override
            lst = self._agent_dict_from_config(lst)

        # TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
        # because it needs attribute such as unique_id, which are only present after init
        new_agents = [self._agent_from_dict(agent) for agent in lst]

        for a in new_agents:
            self.schedule.add(a)

        for rule in override:
            for agent in agentmod.filter_agents(self.schedule._agents, **rule.filter):
                for attr, value in rule.state.items():
                    setattr(agent, attr, value)

    def _agent_dict_from_config(self, cfg):
        return agentmod.from_config(cfg, random=self.random)

    @property
    def agents(self):
        return agentmod.AgentView(self.schedule._agents)

    def find_one(self, *args, **kwargs):
        return agentmod.AgentView(self.schedule._agents).one(*args, **kwargs)

    def count_agents(self, *args, **kwargs):
        return sum(1 for i in self.agents(*args, **kwargs))

    @property
    def now(self):
        if self.schedule:
            return self.schedule.time
        raise Exception(
            "The environment has not been scheduled, so it has no sense of time"
        )

    def add_agent(self, unique_id=None, **kwargs):
        if unique_id is None:
            unique_id = self.next_id()

        kwargs["unique_id"] = unique_id
        a = self._agent_from_dict(kwargs)
agent_type, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
else:
serialization.logger.debug('Skipping node {}'.format(agent_id))
return
return self.set_agent(agent_id, agent_type, state)
def set_agent(self, agent_id, agent_type, state=None):
node = self.G.nodes[agent_id]
defstate = deepcopy(self.default_state) or {}
defstate.update(self.states.get(agent_id, {}))
defstate.update(node.get('state', {}))
if state:
defstate.update(state)
a = None
if agent_type:
state = defstate
a = agent_type(model=self,
unique_id=agent_id)
for (k, v) in getattr(a, 'defaults', {}).items():
if not hasattr(a, k) or getattr(a, k) is None:
setattr(a, k, v)
for (k, v) in state.items():
setattr(a, k, v)
node['agent'] = a
self.schedule.add(a) self.schedule.add(a)
return a return a
def add_node(self, agent_type, state=None): def log(self, message, *args, level=logging.INFO, **kwargs):
agent_id = int(len(self.G.nodes())) if not self.logger.isEnabledFor(level):
self.G.add_node(agent_id) return
a = self.set_agent(agent_id, agent_type, state) message = message + " ".join(str(i) for i in args)
a['visible'] = True message = " @{:>3}: {}".format(self.now, message)
return a for k, v in kwargs:
message += " {k}={v} ".format(k, v)
def add_edge(self, agent1, agent2, start=None, **attrs): extra = {}
if hasattr(agent1, 'id'): extra["now"] = self.now
agent1 = agent1.id extra["id"] = self.id
if hasattr(agent2, 'id'): return self.logger.log(level, message, extra=extra)
agent2 = agent2.id
start = start or self.now
return self.G.add_edge(agent1, agent2, **attrs)
def step(self): def step(self):
"""
Advance one step in the simulation, and update the data collection and scheduler appropriately
"""
super().step() super().step()
self.datacollector.collect(self) self.logger.info(
f"--- Step: {self.schedule.steps:^5} - Time: {self.now:^5} ---"
)
self.schedule.step() self.schedule.step()
self.datacollector.collect(self)
def run(self, until, *args, **kwargs):
self._save_state()
while self.schedule.next_time <= until and not math.isinf(self.schedule.next_time):
self.schedule.step(until=until)
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
self._history.flush_cache()
def _save_state(self, now=None):
serialization.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
return self._history[key]
return self.environment_params[key]
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = Key(*key)
self._history.save_record(*k,
value=value)
return
self.environment_params[key] = value
self._history.save_record(dict_id='env',
t_step=self.now,
key=key,
value=value)
def __contains__(self, key): def __contains__(self, key):
return key in self.environment_params return key in self.env_params
def get(self, key, default=None): def get(self, key, default=None):
''' """
Get the value of an environment attribute in a Get the value of an environment attribute.
given point in the simulation (history). Return `default` if the value is not set.
If key is an attribute name, this method returns """
the current value. return self.env_params.get(key, default)
To get values at other times, use a
:meth: `soil.history.Key` tuple.
'''
return self[key] if key in self else default
def get_agent(self, agent_id): def __getitem__(self, key):
return self.G.nodes[agent_id]['agent'] return self.env_params.get(key)
def get_agents(self, nodes=None): def __setitem__(self, key, value):
if nodes is None: return self.env_params.__setitem__(key, value)
return self.agents
return (self.G.nodes[i]['agent'] for i in nodes)
def dump_csv(self, f): def __str__(self):
with utils.open_or_reuse(f, 'w') as f: return str(self.env_params)
cr = csv.writer(f)
cr.writerow(('agent_id', 't_step', 'key', 'value'))
for i in self.history_to_tuples():
cr.writerow(i)
def dump_gexf(self, f):
G = self.history_to_graph()
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
nx.write_gexf(G, f, version="1.2draft") class NetworkEnvironment(BaseEnvironment):
"""
The NetworkEnvironment is an environment that includes one or more networkx.Graph intances
and methods to associate agents to nodes and vice versa.
"""
def dump(self, *args, formats=None, **kwargs): def __init__(
if not formats: self, *args, topology: Union[config.NetConfig, nx.Graph] = None, **kwargs
return ):
functions = { agents = kwargs.pop("agents", None)
'csv': self.dump_csv, super().__init__(*args, agents=None, **kwargs)
'gexf': self.dump_gexf
}
for f in formats:
if f in functions:
functions[f](*args, **kwargs)
else:
raise ValueError('Unknown format: {}'.format(f))
def dump_sqlite(self, f): self._set_topology(topology)
return self._history.dump(f)
def state_to_tuples(self, now=None): self.init_agents(agents)
if now is None:
now = self.now
for k, v in self.environment_params.items():
yield Record(dict_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
for k, v in agent.state.items():
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
def history_to_tuples(self): def init_agents(self, *args, **kwargs):
return self._history.to_tuples() """Initialize the agents from a"""
super().init_agents(*args, **kwargs)
for agent in self.schedule._agents.values():
if hasattr(agent, "node_id"):
self._init_node(agent)
def history_to_graph(self): def _init_node(self, agent):
G = nx.Graph(self.G) """
Make sure the node for a given agent has the proper attributes.
"""
self.G.nodes[agent.node_id]["agent"] = agent
for agent in self.network_agents: def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg, topology=self.G, random=self.random)
attributes = {'agent': str(agent.__class__)} def _agent_from_dict(self, agent, unique_id=None):
lastattributes = {} agent = dict(agent)
spells = []
lastvisible = False if not agent.get("topology", False):
laststep = None return super()._agent_from_dict(agent)
history = self[agent.id, None, None]
if not history: if unique_id is None:
unique_id = self.next_id()
node_id = agent.get("node_id", None)
if node_id is None:
node_id = network.find_unassigned(self.G, random=self.random)
self.G.nodes[node_id]["agent"] = None
agent["node_id"] = node_id
agent["unique_id"] = unique_id
agent["topology"] = self.G
node_attrs = self.G.nodes[node_id]
node_attrs.update(agent)
agent = node_attrs
a = super()._agent_from_dict(agent)
self._init_node(a)
return a
def _set_topology(self, cfg=None, dir_path=None):
if cfg is None:
cfg = nx.Graph()
elif not isinstance(cfg, nx.Graph):
cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.G = cfg
@property
def network_agents(self):
for a in self.schedule._agents:
if isinstance(a, agentmod.NetworkAgent):
yield a
def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
if unique_id is None:
unique_id = self.next_id()
if node_id is None:
node_id = network.find_unassigned(
G=self.G, shuffle=True, random=self.random
)
if node_id is None:
node_id = f"node_for_{unique_id}"
if node_id not in self.G.nodes:
self.G.add_node(node_id)
assert "agent" not in self.G.nodes[node_id]
self.G.nodes[node_id]["agent"] = None # Reserve
a = self.add_agent(
unique_id=unique_id,
agent_class=agent_class,
topology=self.G,
node_id=node_id,
**kwargs,
)
a["visible"] = True
return a
def add_agent(self, *args, **kwargs):
a = super().add_agent(*args, **kwargs)
if "node_id" in a:
assert self.G.nodes[a.node_id]["agent"] == a
return a
def agent_for_node_id(self, node_id):
return self.G.nodes[node_id].get("agent")
def populate_network(self, agent_class, weights=None, **agent_params):
if not hasattr(agent_class, "len"):
agent_class = [agent_class]
weights = None
for (node_id, node) in self.G.nodes(data=True):
if "agent" in node:
continue continue
for t_step, attribute, value in sorted(list(history)): a_class = self.random.choices(agent_class, weights)[0]
if attribute == 'visible': self.add_agent(node_id=node_id, agent_class=a_class, **agent_params)
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
def __getstate__(self):
state = {}
for prop in _CONFIG_PROPS:
state[prop] = self.__dict__[prop]
state['G'] = json_graph.node_link_data(self.G)
state['environment_agents'] = self._env_agents
state['history'] = self._history
state['schedule'] = self.schedule
return state
def __setstate__(self, state):
for prop in _CONFIG_PROPS:
self.__dict__[prop] = state[prop]
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
self._history = state['history']
# self._env = None
self.schedule = state['schedule']
self._queue = []
SoilEnvironment = Environment class EventedEnvironment(BaseEnvironment):
def broadcast(self, msg, sender=None, expiration=None, ttl=None, **kwargs):
for agent in self.agents(**kwargs):
if agent == sender:
continue
self.logger.info(f"Telling {repr(agent)}: {msg} ttl={ttl}")
try:
inbox = agent._inbox
except AttributeError:
self.logger.info(
f"Agent {agent.unique_id} cannot receive events because it does not have an inbox"
)
continue
# Allow for AttributeError exceptions in this part of the code
inbox.append(
events.Tell(
payload=msg,
sender=sender,
expiration=expiration if ttl is None else self.now + ttl,
)
)
class Environment(NetworkEnvironment, EventedEnvironment):
"""Default environment class, has both network and event capabilities"""
soil/events.py Normal file
@@ -0,0 +1,56 @@
from .time import BaseCond

from dataclasses import dataclass, field
from typing import Any
from uuid import uuid4


class Event:
    pass


@dataclass
class Message:
    payload: Any
    sender: Any = None
    expiration: float = None
    timestamp: float = None
    id: int = field(default_factory=uuid4)

    def expired(self, when):
        return self.expiration is not None and self.expiration < when


class Reply(Message):
    source: Message


class ReplyCond(BaseCond):
    def __init__(self, ask, *args, **kwargs):
        self._ask = ask
        super().__init__(*args, **kwargs)

    def ready(self, agent, time):
        return self._ask.reply is not None or self._ask.expired(time)

    def return_value(self, agent):
        if self._ask.expired(agent.now):
            raise TimedOut()
        return self._ask.reply

    def __repr__(self):
        return f"ReplyCond({self._ask.id})"


class Ask(Message):
    reply: Message = None

    def replied(self, expiration=None):
        return ReplyCond(self)


class Tell(Message):
    pass


class TimedOut(Exception):
    pass
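
This small module is the backbone of agent-to-agent communication: `Tell` is fire-and-forget, `Ask` expects a `Reply` and exposes `replied()`, a condition the scheduler can wait on, and `expiration`/`ttl` make either message perishable. A rough sketch of the semantics, using only the classes above:

from soil import events

note = events.Tell(payload={"speed": 10}, expiration=15)
assert not note.expired(when=10)
assert note.expired(when=20)

question = events.Ask(payload="Are you there?")
cond = question.replied()              # a ReplyCond the scheduler can poll

assert not cond.ready(agent=None, time=0)
question.reply = events.Reply(payload="yes")
assert cond.ready(agent=None, time=0)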
soil/exporters.py
@@ -1,17 +1,20 @@
import os
import sys
from time import time as current_time
from io import BytesIO
from sqlalchemy import create_engine
from textwrap import dedent, indent

import matplotlib.pyplot as plt
import networkx as nx

from .serialization import deserialize
from .utils import try_backup, open_or_reuse, logger, timer

from . import utils, network


class DryRunner(BytesIO):
@@ -22,50 +25,58 @@ class DryRunner(BytesIO):
    def write(self, txt):
        if self.__copy_to:
            self.__copy_to.write("{}:::{}".format(self.__fname, txt))
        try:
            super().write(txt)
        except TypeError:
            super().write(bytes(txt, "utf-8"))

    def close(self):
        content = "(binary data not shown)"
        try:
            content = self.getvalue().decode()
        except UnicodeDecodeError:
            pass
        logger.info(
            "**Not** written to {} (dry run mode):\n\n{}\n\n".format(
                self.__fname, content
            )
        )
        super().close()


class Exporter:
    """
    Interface for all exporters. It is not necessary, but it is useful
    if you don't plan to implement all the methods.
    """

    def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
        self.simulation = simulation
        outdir = outdir or os.path.join(os.getcwd(), "soil_output")
        self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
        self.dry_run = dry_run
        if copy_to is None and dry_run:
            copy_to = sys.stdout
        self.copy_to = copy_to

    def sim_start(self):
        """Method to call when the simulation starts"""
        pass

    def sim_end(self):
        """Method to call when the simulation ends"""
        pass

    def trial_start(self, env):
        """Method to call when a trial starts"""
        pass

    def trial_end(self, env):
        """Method to call when a trial ends"""
        pass

    def output(self, f, mode="w", **kwargs):
        if self.dry_run:
            f = DryRunner(f, copy_to=self.copy_to)
        else:
@@ -76,83 +87,127 @@ class Exporter:
            pass
        return open_or_reuse(f, mode=mode, **kwargs)

    def get_dfs(self, env):
        yield from get_dc_dfs(env.datacollector, trial_id=env.id)


def get_dc_dfs(dc, trial_id=None):
    dfs = {
        "env": dc.get_model_vars_dataframe(),
        "agents": dc.get_agent_vars_dataframe(),
    }
    for table_name in dc.tables:
        dfs[table_name] = dc.get_table_dataframe(table_name)
    if trial_id:
        for (name, df) in dfs.items():
            df["trial_id"] = trial_id
    yield from dfs.items()


class default(Exporter):
    """Default exporter. Writes sqlite results, as well as the simulation YAML"""

    def sim_start(self):
        if self.dry_run:
            logger.info("NOT dumping results")
            return
        logger.info("Dumping results to %s", self.outdir)
        with self.output(self.simulation.name + ".dumped.yml") as f:
            f.write(self.simulation.to_yaml())
        self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
        try_backup(self.dbpath, remove=True)

    def trial_end(self, env):
        if self.dry_run:
            logger.info("Running in DRY_RUN mode, the database will NOT be created")
            return

        with timer(
            "Dumping simulation {} trial {}".format(self.simulation.name, env.id)
        ):
            engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)

            for (t, df) in self.get_dfs(env):
                df.to_sql(t, con=engine, if_exists="append")


class csv(Exporter):
    """Export the state of each environment (and its agents) in a separate CSV file"""

    def trial_end(self, env):
        with timer(
            "[CSV] Dumping simulation {} trial {} @ dir {}".format(
                self.simulation.name, env.id, self.outdir
            )
        ):
            for (df_name, df) in self.get_dfs(env):
                with self.output("{}.{}.csv".format(env.id, df_name)) as f:
                    df.to_csv(f)


# TODO: reimplement GEXF exporting without history
class gexf(Exporter):
    def trial_end(self, env):
        if self.dry_run:
            logger.info("Not dumping GEXF in dry_run mode")
            return
        with timer(
            "[GEXF] Dumping simulation {} trial {}".format(self.simulation.name, env.id)
        ):
            with self.output("{}.gexf".format(env.id), mode="wb") as f:
                network.dump_gexf(env.history_to_graph(), f)
                self.dump_gexf(env, f)


class dummy(Exporter):
    def sim_start(self):
        with self.output("dummy", "w") as f:
            f.write("simulation started @ {}\n".format(current_time()))

    def trial_start(self, env):
        with self.output("dummy", "w") as f:
            f.write("trial started @ {}\n".format(current_time()))

    def trial_end(self, env):
        with self.output("dummy", "w") as f:
            f.write("trial ended @ {}\n".format(current_time()))

    def sim_end(self):
        with self.output("dummy", "a") as f:
            f.write("simulation ended @ {}\n".format(current_time()))


class graphdrawing(Exporter):
    def trial_end(self, env):
        # Outside effects
        f = plt.figure()
        nx.draw(
            env.G,
            node_size=10,
            width=0.2,
            pos=nx.spring_layout(env.G, scale=100),
            ax=f.add_subplot(111),
        )
        with open("graph-{}.png".format(env.id)) as f:
            f.savefig(f)


class summary(Exporter):
    """Print a summary of each trial to sys.stdout"""

    def trial_end(self, env):
        for (t, df) in self.get_dfs(env):
            if not len(df):
                continue
            msg = indent(str(df.describe()), "    ")
            logger.info(
                dedent(
                    f"""
            Dataframe {t}:
            """
                )
                + msg
            )
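
Because exporters now receive the environment itself and pull dataframes through `get_dfs`, adding a new output format usually means overriding a single lifecycle hook. A hedged sketch of a custom exporter (the class name and file naming scheme are made up for the example):

from soil.exporters import Exporter


class json_dump(Exporter):
    """Illustrative exporter: one JSON file per dataframe and per trial."""

    def trial_end(self, env):
        for (name, df) in self.get_dfs(env):
            # self.output() honours dry_run and the exporter's output directory
            with self.output(f"{env.id}.{name}.json") as f:
                df.to_json(f, orient="records")

Since `serialization.deserialize` accepts classes as well as strings, such an exporter can be passed directly, e.g. `simulation.run(exporters=[json_dump])`.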
soil/network.py Normal file
@@ -0,0 +1,83 @@
from __future__ import annotations

from typing import Dict

import os
import sys
import random

import networkx as nx

from . import config, serialization, basestring


def from_config(cfg: config.NetConfig, dir_path: str = None):
    if not isinstance(cfg, config.NetConfig):
        cfg = config.NetConfig(**cfg)

    if cfg.path:
        path = cfg.path
        if dir_path and not os.path.isabs(path):
            path = os.path.join(dir_path, path)
        extension = os.path.splitext(path)[1][1:]
        kwargs = {}
        if extension == "gexf":
            kwargs["version"] = "1.2draft"
            kwargs["node_type"] = int
        try:
            method = getattr(nx.readwrite, "read_" + extension)
        except AttributeError:
            raise AttributeError("Unknown format")
        return method(path, **kwargs)

    if cfg.params:
        net_args = cfg.params.dict()
        net_gen = net_args.pop("generator")

        if dir_path not in sys.path:
            sys.path.append(dir_path)

        method = serialization.deserializer(
            net_gen,
            known_modules=[
                "networkx.generators",
            ],
        )
        return method(**net_args)

    if isinstance(cfg.fixed, config.Topology):
        cfg = cfg.fixed.dict()

    if isinstance(cfg, str) or isinstance(cfg, dict):
        return nx.json_graph.node_link_graph(cfg)

    return nx.Graph()


def find_unassigned(G, shuffle=False, random=random):
    """
    Link an agent to a node in a topology.

    If node_id is None, a node without an agent_id will be found.
    """
    candidates = list(G.nodes(data=True))
    if shuffle:
        random.shuffle(candidates)
    for next_id, data in candidates:
        if "agent" not in data:
            return next_id

    return None


def dump_gexf(G, f):
    for node in G.nodes():
        if "pos" in G.nodes[node]:
            G.nodes[node]["viz"] = {
                "position": {
                    "x": G.nodes[node]["pos"][0],
                    "y": G.nodes[node]["pos"][1],
                    "z": 0.0,
                }
            }
            del G.nodes[node]["pos"]

    nx.write_gexf(G, f, version="1.2draft")
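
The helpers are usable on their own; a quick sketch (the graph and its attributes are arbitrary):

import io
import random

import networkx as nx

from soil import network

G = nx.path_graph(3)
G.nodes[0]["agent"] = object()          # node 0 is already taken

# find_unassigned returns a node id with no "agent" attribute
free = network.find_unassigned(G, shuffle=True, random=random)
assert free in (1, 2)

# dump_gexf converts "pos" tuples into GEXF viz positions before writing
nx.set_node_attributes(G, {n: (float(n), 0.0) for n in G}, "pos")
buf = io.BytesIO()
network.dump_gexf(G, buf)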
soil/serialization.py
@@ -2,58 +2,27 @@ import os
import logging
import ast
import sys
import re
import importlib
from glob import glob
from itertools import product, chain

from .config import Config

import yaml
import networkx as nx

from jinja2 import Template

logger = logging.getLogger("soil")


def load_file(infile):
    folder = os.path.dirname(infile)
    if folder not in sys.path:
        sys.path.append(folder)
    with open(infile, "r") as f:
        return list(chain.from_iterable(map(expand_template, load_string(f))))

@@ -62,14 +31,15 @@ def load_string(string):
def expand_template(config):
    if "template" not in config:
        yield config
        return
    if "vars" not in config:
        raise ValueError(
            ("You must provide a definition of variables" " for the template.")
        )

    template = config["template"]

    if not isinstance(template, str):
        template = yaml.dump(template)

@@ -81,9 +51,9 @@ def expand_template(config):
    blank_str = template.render({k: 0 for k in params[0].keys()})
    blank = list(load_string(blank_str))
    if len(blank) > 1:
        raise ValueError("Templates must not return more than one configuration")
    if "name" in blank[0]:
        raise ValueError("Templates cannot be named, use group instead")

    for ps in params:
        string = template.render(ps)
@@ -92,59 +62,64 @@ def expand_template(config):


def params_for_template(config):
    sampler_config = config.get("sampler", {"N": 100})
    sampler = sampler_config.pop("method", "SALib.sample.morris.sample")
    sampler = deserializer(sampler)
    bounds = config["vars"]["bounds"]

    problem = {
        "num_vars": len(bounds),
        "names": list(bounds.keys()),
        "bounds": list(v for v in bounds.values()),
    }
    samples = sampler(problem, **sampler_config)

    lists = config["vars"].get("lists", {})
    names = list(lists.keys())
    values = list(lists.values())
    combs = list(product(*values))

    allnames = names + problem["names"]
    allvalues = [(list(i[0]) + list(i[1])) for i in product(combs, samples)]
    params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
    return params


def load_files(*patterns, **kwargs):
    for pattern in patterns:
        for i in glob(pattern, **kwargs, recursive=True):
            for cfg in load_file(i):
                path = os.path.abspath(i)
                yield Config.from_raw(cfg), path


def load_config(cfg):
    if isinstance(cfg, Config):
        yield cfg, os.getcwd()
    elif isinstance(cfg, dict):
        yield Config.from_raw(cfg), os.getcwd()
    else:
        yield from load_files(cfg)


builtins = importlib.import_module("builtins")

KNOWN_MODULES = [
    "soil",
]


def name(value, known_modules=KNOWN_MODULES):
    """Return a name that can be imported, to serialize/deserialize an object"""
    if value is None:
        return "None"
    if not isinstance(value, type):  # Get the class name first
        value = type(value)
    tname = value.__name__
    if hasattr(builtins, tname):
        return tname
    modname = value.__module__
    if modname == "__main__":
        return tname
    if known_modules and modname in known_modules:
        return tname
@@ -154,69 +129,100 @@ def name(value, known_modules=[]):
        module = importlib.import_module(kmod)
        if hasattr(module, tname):
            return tname
    return "{}.{}".format(modname, tname)


def serializer(type_):
    if type_ != "str" and hasattr(builtins, type_):
        return repr
    return lambda x: x


def serialize(v, known_modules=KNOWN_MODULES):
    """Get a text representation of an object."""
    tname = name(v, known_modules=known_modules)
    func = serializer(tname)
    return func(v), tname


def serialize_dict(d, known_modules=KNOWN_MODULES):
    d = dict(d)
    for (k, v) in d.items():
        if isinstance(v, dict):
            d[k] = serialize_dict(v, known_modules=known_modules)
        elif isinstance(v, list):
            for ix in range(len(v)):
                v[ix] = serialize_dict(v[ix], known_modules=known_modules)
        elif isinstance(v, type):
            d[k] = serialize(v, known_modules=known_modules)[1]
    return d


IS_CLASS = re.compile(r"<class '(.*)'>")


def deserializer(type_, known_modules=KNOWN_MODULES):
    if type(type_) != str:  # Already deserialized
        return type_
    if type_ == "str":
        return lambda x="": x
    if type_ == "None":
        return lambda x=None: None
    if hasattr(builtins, type_):  # Check if it's a builtin type
        cls = getattr(builtins, type_)
        return lambda x=None: ast.literal_eval(x) if x is not None else cls()
    match = IS_CLASS.match(type_)
    if match:
        modname, tname = match.group(1).rsplit(".", 1)
        module = importlib.import_module(modname)
        cls = getattr(module, tname)
        return getattr(cls, "deserialize", cls)

    # Otherwise, see if we can find the module and the class
    options = []

    for mod in known_modules:
        if mod:
            options.append((mod, type_))

    if "." in type_:  # Fully qualified module
        module, type_ = type_.rsplit(".", 1)
        options.append((module, type_))

    errors = []
    for modname, tname in options:
        try:
            module = importlib.import_module(modname)
            cls = getattr(module, tname)
            return getattr(cls, "deserialize", cls)
        except (ImportError, AttributeError) as ex:
            errors.append((modname, tname, ex))
    raise ValueError('Could not find type "{}". Tried: {}'.format(type_, errors))


def deserialize(type_, value=None, globs=None, **kwargs):
    """Get an object from a text representation"""
    if not isinstance(type_, str):
        return type_

    if globs and type_ in globs:
        des = globs[type_]
    else:
        try:
            des = deserializer(type_, **kwargs)
        except ValueError as ex:
            try:
                des = eval(type_)
            except Exception:
                raise ex

    if value is None:
        return des
    return des(value)


def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
    """Return the list of deserialized objects"""
    objects = []
    for name in names:
        mod = deserialize(name, known_modules=known_modules)
        objects.append(mod(*args, **kwargs))
    return objects
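
The public surface is essentially `serialize` (object to a `(text, type name)` pair) and `deserialize`/`deserializer` (text back to an object), now searching `KNOWN_MODULES` by default and understanding fully qualified names. A small sketch of a round trip:

import networkx as nx

from soil import serialization

# Built-in values round-trip through repr/ast.literal_eval
text, tname = serialization.serialize(42)
assert (text, tname) == ("42", "int")
assert serialization.deserialize(tname, text) == 42

# Class names resolve back to the class object, searching known modules
assert serialization.deserialize("networkx.Graph") is nx.Graph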
soil/simulation.py
@@ -1,358 +1,268 @@
import os
from time import time as current_time, strftime
import importlib
import sys
import yaml
import traceback
import inspect
import logging
import networkx as nx

from textwrap import dedent

from dataclasses import dataclass, field, asdict
from typing import Any, Dict, Union, Optional, List

from networkx.readwrite import json_graph
from functools import partial
import pickle

from . import serialization, exporters, utils, basestring, agents
from .environment import Environment
from .utils import logger, run_and_return_exceptions
from .config import Config, convert_old


# TODO: change documentation for simulation
@dataclass
class Simulation:
    """
    Parameters
    ---------
    config (optional): :class:`config.Config`
        name of the Simulation

    kwargs: parameters to use to initialize a new configuration, if one has not been provided.
    """

    version: str = "2"
    name: str = "Unnamed simulation"
    description: Optional[str] = ""
    group: str = None
    model_class: Union[str, type] = "soil.Environment"
    model_params: dict = field(default_factory=dict)
    seed: str = field(default_factory=lambda: current_time())
    dir_path: str = field(default_factory=lambda: os.getcwd())
    max_time: float = float("inf")
    max_steps: int = -1
    interval: int = 1
    num_trials: int = 1
    parallel: Optional[bool] = None
    exporters: Optional[List[str]] = field(default_factory=list)
    outdir: Optional[str] = None
    exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
    dry_run: bool = False
    extra: Dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, env, **kwargs):

        ignored = {
            k: v for k, v in env.items() if k not in inspect.signature(cls).parameters
        }

        d = {k: v for k, v in env.items() if k not in ignored}
        if ignored:
            d.setdefault("extra", {}).update(ignored)
        if ignored:
            logger.warning(f'Ignoring these parameters (added to "extra"): { ignored }')
        d.update(kwargs)

        return cls(**d)

    def run_simulation(self, *args, **kwargs):
        return self.run(*args, **kwargs)

    def run(self, *args, **kwargs):
        """Run the simulation and return the list of resulting environments"""
        logger.info(
            dedent(
                """
            Simulation:
            ---
            """
            )
            + self.to_yaml()
        )
        return list(self.run_gen(*args, **kwargs))

    def run_gen(
        self,
        parallel=False,
        dry_run=None,
        exporters=None,
        outdir=None,
        exporter_params={},
        log_level=None,
        **kwargs,
    ):
        """Run the simulation and yield the resulting environments."""
        if log_level:
            logger.setLevel(log_level)
        outdir = outdir or self.outdir
        logger.info("Using exporters: %s", exporters or [])
        logger.info("Output directory: %s", outdir)
        if dry_run is None:
            dry_run = self.dry_run
        if exporters is None:
            exporters = self.exporters
        if not exporter_params:
            exporter_params = self.exporter_params

        exporters = serialization.deserialize_all(
            exporters,
            simulation=self,
            known_modules=[
                "soil.exporters",
            ],
            dry_run=dry_run,
            outdir=outdir,
            **exporter_params,
        )

        with utils.timer("simulation {}".format(self.name)):
            for exporter in exporters:
                exporter.sim_start()

            for env in utils.run_parallel(
                func=self.run_trial,
                iterable=range(int(self.num_trials)),
                parallel=parallel,
                log_level=log_level,
                **kwargs,
            ):

                for exporter in exporters:
                    exporter.trial_start(env)

                for exporter in exporters:
                    exporter.trial_end(env)

                yield env

            for exporter in exporters:
                exporter.sim_end()

    def get_env(self, trial_id=0, model_params=None, **kwargs):
        """Create an environment for a trial of the simulation"""

        def deserialize_reporters(reporters):
            for (k, v) in reporters.items():
                if isinstance(v, str) and v.startswith("py:"):
                    reporters[k] = serialization.deserialize(v.split(":", 1)[1])
            return reporters

        params = self.model_params.copy()
        if model_params:
            params.update(model_params)
        params.update(kwargs)

        agent_reporters = deserialize_reporters(params.pop("agent_reporters", {}))
        model_reporters = deserialize_reporters(params.pop("model_reporters", {}))

        env = serialization.deserialize(self.model_class)
        return env(
            id=f"{self.name}_trial_{trial_id}",
            seed=f"{self.seed}_trial_{trial_id}",
            dir_path=self.dir_path,
            agent_reporters=agent_reporters,
            model_reporters=model_reporters,
            **params,
        )

    def run_trial(
        self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts
    ):
        """
        Run a single trial of the simulation
        """
        if log_level:
            logger.setLevel(log_level)
        model = self.get_env(trial_id, **opts)
        trial_id = trial_id if trial_id is not None else current_time()
        with utils.timer("Simulation {} trial {}".format(self.name, trial_id)):
            return self.run_model(
                model=model, trial_id=trial_id, until=until, log_level=log_level
            )

    def run_model(self, model, until=None, **opts):
        # Set-up trial environment and graph
        until = float(until or self.max_time or "inf")

        # Set up agents on nodes
        def is_done():
            return not model.running

        if until and hasattr(model.schedule, "time"):
            prev = is_done

            def is_done():
                return prev() or model.schedule.time >= until

        if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, "steps"):
            prev_steps = is_done

            def is_done():
                return prev_steps() or model.schedule.steps >= self.max_steps

        newline = "\n"
        logger.info(
            dedent(
                f"""
        Model stats:
          Agents (total: { model.schedule.get_agent_count() }):
            - { (newline + '            - ').join(str(a) for a in model.schedule.agents) }

          Topology size: { len(model.G) if hasattr(model, "G") else 0 }
        """
            )
        )

        while not is_done():
            utils.logger.debug(
                f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}'
            )
            model.step()

        if (
            model.schedule.time < until
        ):  # Simulation ended (no more steps) before the expected time
            model.schedule.time = until
        return model

    def to_dict(self):
        d = asdict(self)
        if not isinstance(d["model_class"], str):
            d["model_class"] = serialization.name(d["model_class"])
        d["model_params"] = serialization.serialize_dict(d["model_params"])
        d["dir_path"] = str(d["dir_path"])
        d["version"] = "2"
        return d

    def to_yaml(self):
        return yaml.dump(self.to_dict())


def iter_from_config(*cfgs, **kwargs):
    for config in cfgs:
        configs = list(serialization.load_config(config))
        for config, path in configs:
            d = dict(config)
            if "dir_path" not in d:
                d["dir_path"] = os.path.dirname(path)
            yield Simulation.from_dict(d, **kwargs)


def from_config(conf_or_path):
    lst = list(iter_from_config(conf_or_path))
    if len(lst) > 1:
        raise AttributeError("Provide only one configuration")
    return lst[0]


def run_from_config(*configs, **kwargs):
    for sim in iter_from_config(*configs):
        logger.info(f"Using config(s): {sim.name}")
        sim.run_simulation(**kwargs)
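
With `Simulation` now a dataclass, a run can be described in YAML and loaded through `iter_from_config`/`from_config`, or built directly in Python. A minimal programmatic sketch (the parameter values are placeholders):

from soil.simulation import Simulation

sim = Simulation(
    name="example",
    model_class="soil.Environment",   # resolved through soil.serialization
    model_params={"some_param": 1},   # forwarded to the environment
    num_trials=2,
    max_steps=10,
    dry_run=True,                     # exporters will not write anything
)

envs = sim.run()                      # one environment per trial
print(sim.to_yaml())                  # serializable description of the run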
soil/stats.py (deleted)
@@ -1,106 +0,0 @@
import pandas as pd
from collections import Counter
class Stats:
'''
Interface for all stats. It is not necessary, but it is useful
if you don't plan to implement all the methods.
'''
def __init__(self, simulation):
self.simulation = simulation
def start(self):
'''Method to call when the simulation starts'''
pass
def end(self):
'''Method to call when the simulation ends'''
return {}
def trial(self, env):
'''Method to call when a trial ends'''
return {}
class distribution(Stats):
'''
Calculate the distribution of agent states at the end of each trial,
the mean value, and its deviation.
'''
def start(self):
self.means = []
self.counts = []
def trial(self, env):
df = env[None, None, None].df()
df = df.drop('SEED', axis=1)
ix = df.index[-1]
attrs = df.columns.get_level_values(0)
vc = {}
stats = {
'mean': {},
'count': {},
}
for a in attrs:
t = df.loc[(ix, a)]
try:
stats['mean'][a] = t.mean()
self.means.append(('mean', a, t.mean()))
except TypeError:
pass
for name, count in t.value_counts().iteritems():
if a not in stats['count']:
stats['count'][a] = {}
stats['count'][a][name] = count
self.counts.append(('count', a, name, count))
return stats
def end(self):
dfm = pd.DataFrame(self.means, columns=['metric', 'key', 'value'])
dfc = pd.DataFrame(self.counts, columns=['metric', 'key', 'value', 'count'])
count = {}
mean = {}
if self.means:
res = dfm.groupby(by=['key']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
mean = res['value'].to_dict()
if self.counts:
res = dfc.groupby(by=['key', 'value']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
for k,v in res['count'].to_dict().items():
if k not in count:
count[k] = {}
for tup, times in v.items():
subkey, subcount = tup
if subkey not in count[k]:
count[k][subkey] = {}
count[k][subkey][subcount] = times
return {'count': count, 'mean': mean}
class defaultStats(Stats):
def trial(self, env):
c = Counter()
c.update(a.__class__.__name__ for a in env.network_agents)
c2 = Counter()
c2.update(a['id'] for a in env.network_agents)
return {
'network ': {
'n_nodes': env.G.number_of_nodes(),
'n_edges': env.G.number_of_edges(),
},
'agents': {
'model_count': dict(c),
'state_count': dict(c2),
}
}
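
The removed stats pipeline is effectively replaced by Mesa's `DataCollector` (via `model_reporters`/`agent_reporters` in `model_params`) together with exporters such as `summary` and `csv`. A hedged sketch of a rough equivalent of the old `defaultStats`, using the `py:` prefix that `Simulation.get_env` recognises for inline reporters (treat the exact strings as illustrative):

from soil.simulation import Simulation

sim = Simulation(
    name="stats_example",
    model_params={
        "model_reporters": {
            "n_nodes": "py:lambda env: env.G.number_of_nodes()",
            "n_edges": "py:lambda env: env.G.number_of_edges()",
        },
    },
    exporters=["summary"],   # prints df.describe() for each collected table
    num_trials=1,
    max_steps=5,
    dry_run=True,
)
envs = sim.run()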
soil/time.py
@@ -2,27 +2,89 @@ from mesa.time import BaseScheduler
from queue import Empty from queue import Empty
from heapq import heappush, heappop from heapq import heappush, heappop
import math import math
from inspect import getsource
from numbers import Number
from textwrap import dedent
from .utils import logger from .utils import logger
from mesa import Agent from mesa import Agent as MesaAgent
INFINITY = float("inf")
class DeadAgent(Exception):
pass
class When: class When:
def __init__(self, time): def __init__(self, time):
self._time = float(time) if isinstance(time, When):
return time
self._time = time
def abs(self, time): def abs(self, time):
return self._time return self._time
def schedule_next(self, time, delta, first=False):
return (self._time, None)
class Delta:
NEVER = When(INFINITY)
class Delta(When):
def __init__(self, delta): def __init__(self, delta):
self._delta = delta self._delta = delta
def __eq__(self, other):
return self._delta == other._delta
def abs(self, time): def abs(self, time):
return time + self._delta return self._time + self._delta
def __eq__(self, other):
if isinstance(other, Delta):
return self._delta == other._delta
return False
def schedule_next(self, time, delta, first=False):
return (time + self._delta, None)
def __repr__(self):
return str(f"Delta({self._delta})")
class BaseCond:
def __init__(self, msg=None, delta=None, eager=False):
self._msg = msg
self._delta = delta
self.eager = eager
def schedule_next(self, time, delta, first=False):
if first and self.eager:
return (time, self)
if self._delta:
delta = self._delta
return (time + delta, self)
def return_value(self, agent):
return None
def __repr__(self):
return self._msg or self.__class__.__name__
class Cond(BaseCond):
def __init__(self, func, *args, **kwargs):
self._func = func
super().__init__(*args, **kwargs)
def ready(self, agent, time):
return self._func(agent)
def __repr__(self):
if self._msg:
return self._msg
return str(f'Cond("{dedent(getsource(self._func)).strip()}")')
class TimedActivation(BaseScheduler):
@@ -30,58 +92,115 @@ class TimedActivation(BaseScheduler):
    In each activation, each agent will update its 'next_time'.
    """

    def __init__(self, *args, shuffle=True, **kwargs):
        super().__init__(*args, **kwargs)
        self._next = {}
        self._queue = []
        self._shuffle = shuffle
        self.step_interval = 1
        self.logger = logger.getChild(f"time_{ self.model }")

    def add(self, agent: MesaAgent, when=None):
        if when is None:
            when = self.time
        elif isinstance(when, When):
            when = when.abs()
        self._schedule(agent, None, when)
        super().add(agent)

    def _schedule(self, agent, condition=None, when=None):
        if condition:
            if not when:
                when, condition = condition.schedule_next(
                    when or self.time, self.step_interval
                )
        else:
            if when is None:
                when = self.time + self.step_interval
            condition = None
        if self._shuffle:
            key = (when, self.model.random.random(), condition)
        else:
            key = (when, agent.unique_id, condition)
        self._next[agent.unique_id] = key
        heappush(self._queue, (key, agent))

    def step(self) -> None:
        """
        Executes agents in order, one at a time. After each step,
        an agent will signal when it wants to be scheduled next.
        """
        self.logger.debug(f"Simulation step {self.time}")
        if not self.model.running:
            return

        self.logger.debug(f"Queue length: {len(self._queue)}")

        while self._queue:
            ((when, _id, cond), agent) = self._queue[0]
            if when > self.time:
                break
            heappop(self._queue)
            if cond:
                if not cond.ready(agent, self.time):
                    self._schedule(agent, cond)
                    continue
                try:
                    agent._last_return = cond.return_value(agent)
                except Exception as ex:
                    agent._last_except = ex
            else:
                agent._last_return = None
                agent._last_except = None

            self.logger.debug(f"Stepping agent {agent}")
            self._next.pop(agent.unique_id, None)

            try:
                returned = agent.step()
            except DeadAgent:
                agent.alive = False
                continue

            # Check status for MESA agents
            if not getattr(agent, "alive", True):
                continue

            if returned:
                next_check = returned.schedule_next(
                    self.time, self.step_interval, first=True
                )
                self._schedule(agent, when=next_check[0], condition=next_check[1])
            else:
                next_check = (self.time + self.step_interval, None)
                self._schedule(agent)

        self.steps += 1

        if not self._queue:
            self.time = INFINITY
            self.model.running = False
            return self.time

        next_time = self._queue[0][0][0]
        if next_time < self.time:
            raise Exception(
                f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
            )
        self.logger.debug(f"Updating time step: {self.time} -> {next_time}")

        self.time = next_time


class ShuffledTimedActivation(TimedActivation):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, shuffle=True, **kwargs)


class OrderedTimedActivation(TimedActivation):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, shuffle=False, **kwargs)
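
The queue key built in _schedule is (time, tie-breaker, condition), so whatever an agent returns from step() decides when, and under which condition, it runs again. Below is a minimal sketch of the two typical return values, assuming only the TimedActivation behaviour shown above; the agent class names and the model "flag" attribute are made up for illustration.

from soil import agents
from soil import time as stime


class DelayedAgent(agents.BaseAgent):
    def step(self):
        # Ask the scheduler to run this agent again two time units from now
        # instead of on the next tick (see Delta.schedule_next above).
        return stime.Delta(2)


class WaitsForFlag(agents.BaseAgent):
    def step(self):
        # Sleep until the (hypothetical) model attribute `flag` is truthy;
        # the scheduler re-evaluates the predicate every time it advances.
        return stime.Cond(lambda agent: getattr(agent.model, "flag", False))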


@@ -1,71 +1,106 @@
import logging
from time import time as current_time, strftime, gmtime, localtime
import os
import traceback

from functools import partial
from shutil import copyfile, move
from multiprocessing import Pool

from contextlib import contextmanager

logger = logging.getLogger("soil")
logger.setLevel(logging.INFO)

timeformat = "%H:%M:%S"

if os.environ.get("SOIL_VERBOSE", ""):
    logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
else:
    logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"

logFormatter = logging.Formatter(logformat, timeformat)

consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)

logging.basicConfig(
    level=logging.INFO,
    handlers=[
        consoleHandler,
    ],
)


@contextmanager
def timer(name="task", pre="", function=logger.info, to_object=None):
    start = current_time()
    function("{}Starting {} at {}.".format(pre, name, strftime("%X", gmtime(start))))
    yield start
    end = current_time()
    function(
        "{}Finished {} at {} in {} seconds".format(
            pre, name, strftime("%X", gmtime(end)), str(end - start)
        )
    )
    if to_object:
        to_object.start = start
        to_object.end = end


def try_backup(path, remove=False):
    if not os.path.exists(path):
        return None
    outdir = os.path.dirname(path)
    if outdir and not os.path.exists(outdir):
        os.makedirs(outdir)
    creation = os.path.getctime(path)
    stamp = strftime("%Y-%m-%d_%H.%M.%S", localtime(creation))
    backup_dir = os.path.join(outdir, "backup")
    if not os.path.exists(backup_dir):
        os.makedirs(backup_dir)
    newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
    if move:
        move(path, newpath)
    else:
        copyfile(path, newpath)
    return newpath


def safe_open(path, mode="r", backup=True, **kwargs):
    outdir = os.path.dirname(path)
    if outdir and not os.path.exists(outdir):
        os.makedirs(outdir)
    if backup and "w" in mode:
        try_backup(path)
    return open(path, mode=mode, **kwargs)


@contextmanager
def open_or_reuse(f, *args, **kwargs):
    try:
        with safe_open(f, *args, **kwargs) as f:
            yield f
    except (AttributeError, TypeError) as ex:
        yield f


def flatten_dict(d):
    if not isinstance(d, dict):
        return d
    return dict(_flatten_dict(d))


def _flatten_dict(d, prefix=""):
    if not isinstance(d, dict):
        # print('END:', prefix, d)
        yield prefix, d
        return
    if prefix:
        prefix = prefix + "."
    for k, v in d.items():
        # print(k, v)
        res = list(_flatten_dict(v, prefix="{}{}".format(prefix, k)))
        # print('RES:', res)
        yield from res

@@ -77,7 +112,7 @@ def unflatten_dict(d):
        if not isinstance(k, str):
            target[k] = v
            continue
        tokens = k.split(".")
        if len(tokens) < 2:
            target[k] = v
            continue
@@ -87,3 +122,33 @@ def unflatten_dict(d):
            target = target[token]
        target[tokens[-1]] = v
    return out


def run_and_return_exceptions(func, *args, **kwargs):
    """
    A wrapper for run_trial that catches exceptions and returns them.
    It is meant for async simulations.
    """
    try:
        return func(*args, **kwargs)
    except Exception as ex:
        if ex.__cause__ is not None:
            ex = ex.__cause__
        ex.message = "".join(
            traceback.format_exception(type(ex), ex, ex.__traceback__)[:]
        )
        return ex


def run_parallel(func, iterable, parallel=False, **kwargs):
    if parallel and not os.environ.get("SOIL_DEBUG", None):
        p = Pool()
        wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
        for i in p.imap_unordered(wrapped_func, iterable):
            if isinstance(i, Exception):
                logger.error("Trial failed:\n\t%s", i.message)
                continue
            yield i
    else:
        for i in iterable:
            yield func(i, **kwargs)
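
For reference, flatten_dict and unflatten_dict are meant to be inverses for string keys, with a dot as the nesting separator. A quick illustration of that expected behaviour (not part of the diff itself):

from soil.utils import flatten_dict, unflatten_dict

nested = {"a": {"b": 1, "c": {"d": 2}}}
flat = flatten_dict(nested)
# -> {"a.b": 1, "a.c.d": 2}
assert unflatten_dict(flat) == nested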


@@ -4,7 +4,7 @@ import logging
logger = logging.getLogger(__name__)

ROOT = os.path.dirname(__file__)
DEFAULT_FILE = os.path.join(ROOT, "VERSION")


def read_version(versionfile=DEFAULT_FILE):
@@ -12,9 +12,10 @@ def read_version(versionfile=DEFAULT_FILE):
        with open(versionfile) as f:
            return f.read().strip()
    except IOError:  # pragma: no cover
        logger.error(
            ("Running an unknown version of {}." "Be careful!.").format(__name__)
        )
        return "0.0"


__version__ = read_version()


@@ -1,5 +1,6 @@
from mesa.visualization.UserParam import UserSettableParameter


class UserSettableParameter(UserSettableParameter):
    def __str__(self):
        return self.value


@@ -20,6 +20,7 @@ from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
from ..simulation import Simulation from ..simulation import Simulation
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO) logger.setLevel(logging.INFO)
@@ -31,140 +32,183 @@ LOGGING_INTERVAL = 0.5
# Workaround to let Soil load the required modules # Workaround to let Soil load the required modules
sys.path.append(ROOT) sys.path.append(ROOT)
class PageHandler(tornado.web.RequestHandler): class PageHandler(tornado.web.RequestHandler):
""" Handler for the HTML template which holds the visualization. """ """Handler for the HTML template which holds the visualization."""
def get(self): def get(self):
self.render('index.html', port=self.application.port, self.render(
name=self.application.name) "index.html", port=self.application.port, name=self.application.name
)
class SocketHandler(tornado.websocket.WebSocketHandler): class SocketHandler(tornado.websocket.WebSocketHandler):
""" Handler for websocket. """ """Handler for websocket."""
executor = ThreadPoolExecutor(max_workers=MAX_WORKERS) executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)
def open(self): def open(self):
if self.application.verbose: if self.application.verbose:
logger.info('Socket opened!') logger.info("Socket opened!")
def check_origin(self, origin): def check_origin(self, origin):
return True return True
def on_message(self, message): def on_message(self, message):
""" Receiving a message from the websocket, parse, and act accordingly. """ """Receiving a message from the websocket, parse, and act accordingly."""
msg = tornado.escape.json_decode(message) msg = tornado.escape.json_decode(message)
if msg['type'] == 'config_file': if msg["type"] == "config_file":
if self.application.verbose: if self.application.verbose:
print(msg['data']) print(msg["data"])
self.config = list(yaml.load_all(msg['data'])) self.config = list(yaml.load_all(msg["data"]))
if len(self.config) > 1: if len(self.config) > 1:
error = 'Please, provide only one configuration.' error = "Please, provide only one configuration."
if self.application.verbose: if self.application.verbose:
logger.error(error) logger.error(error)
self.write_message({'type': 'error', self.write_message({"type": "error", "error": error})
'error': error})
return return
self.config = self.config[0] self.config = self.config[0]
self.send_log('INFO.' + self.simulation_name, self.send_log(
'Using config: {name}'.format(name=self.config['name'])) "INFO." + self.simulation_name,
"Using config: {name}".format(name=self.config["name"]),
)
if 'visualization_params' in self.config: if "visualization_params" in self.config:
self.write_message({'type': 'visualization_params', self.write_message(
'data': self.config['visualization_params']}) {
self.name = self.config['name'] "type": "visualization_params",
"data": self.config["visualization_params"],
}
)
self.name = self.config["name"]
self.run_simulation() self.run_simulation()
settings = [] settings = []
for key in self.config['environment_params']: for key in self.config["environment_params"]:
if type(self.config['environment_params'][key]) == float or type(self.config['environment_params'][key]) == int: if (
if self.config['environment_params'][key] <= 1: type(self.config["environment_params"][key]) == float
setting_type = 'number' or type(self.config["environment_params"][key]) == int
):
if self.config["environment_params"][key] <= 1:
setting_type = "number"
else: else:
setting_type = 'great_number' setting_type = "great_number"
elif type(self.config['environment_params'][key]) == bool: elif type(self.config["environment_params"][key]) == bool:
setting_type = 'boolean' setting_type = "boolean"
else: else:
setting_type = 'undefined' setting_type = "undefined"
settings.append({ settings.append(
'label': key, {
'type': setting_type, "label": key,
'value': self.config['environment_params'][key] "type": setting_type,
}) "value": self.config["environment_params"][key],
}
)
self.write_message({'type': 'settings', self.write_message({"type": "settings", "data": settings})
'data': settings})
elif msg['type'] == 'get_trial': elif msg["type"] == "get_trial":
if self.application.verbose: if self.application.verbose:
logger.info('Trial {} requested!'.format(msg['data'])) logger.info("Trial {} requested!".format(msg["data"]))
self.send_log('INFO.' + __name__, 'Trial {} requested!'.format(msg['data'])) self.send_log("INFO." + __name__, "Trial {} requested!".format(msg["data"]))
self.write_message({'type': 'get_trial', self.write_message(
'data': self.get_trial(int(msg['data']))}) {"type": "get_trial", "data": self.get_trial(int(msg["data"]))}
)
elif msg['type'] == 'run_simulation': elif msg["type"] == "run_simulation":
if self.application.verbose: if self.application.verbose:
logger.info('Running new simulation for {name}'.format(name=self.config['name'])) logger.info(
self.send_log('INFO.' + self.simulation_name, 'Running new simulation for {name}'.format(name=self.config['name'])) "Running new simulation for {name}".format(name=self.config["name"])
self.config['environment_params'] = msg['data'] )
self.send_log(
"INFO." + self.simulation_name,
"Running new simulation for {name}".format(name=self.config["name"]),
)
self.config["environment_params"] = msg["data"]
self.run_simulation() self.run_simulation()
elif msg['type'] == 'download_gexf': elif msg["type"] == "download_gexf":
G = self.trials[ int(msg['data']) ].history_to_graph() G = self.trials[int(msg["data"])].history_to_graph()
for node in G.nodes(): for node in G.nodes():
if 'pos' in G.nodes[node]: if "pos" in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}} G.nodes[node]["viz"] = {
del (G.nodes[node]['pos']) "position": {
writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft') "x": G.nodes[node]["pos"][0],
"y": G.nodes[node]["pos"][1],
"z": 0.0,
}
}
del G.nodes[node]["pos"]
writer = nx.readwrite.gexf.GEXFWriter(version="1.2draft")
writer.add_graph(G) writer.add_graph(G)
self.write_message({'type': 'download_gexf', self.write_message(
'filename': self.config['name'] + '_trial_' + str(msg['data']), {
'data': tostring(writer.xml).decode(writer.encoding) }) "type": "download_gexf",
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
"data": tostring(writer.xml).decode(writer.encoding),
}
)
elif msg['type'] == 'download_json': elif msg["type"] == "download_json":
G = self.trials[ int(msg['data']) ].history_to_graph() G = self.trials[int(msg["data"])].history_to_graph()
for node in G.nodes(): for node in G.nodes():
if 'pos' in G.nodes[node]: if "pos" in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}} G.nodes[node]["viz"] = {
del (G.nodes[node]['pos']) "position": {
self.write_message({'type': 'download_json', "x": G.nodes[node]["pos"][0],
'filename': self.config['name'] + '_trial_' + str(msg['data']), "y": G.nodes[node]["pos"][1],
'data': nx.node_link_data(G) }) "z": 0.0,
}
}
del G.nodes[node]["pos"]
self.write_message(
{
"type": "download_json",
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
"data": nx.node_link_data(G),
}
)
else: else:
if self.application.verbose: if self.application.verbose:
logger.info('Unexpected message!') logger.info("Unexpected message!")
def update_logging(self): def update_logging(self):
try: try:
if (not self.log_capture_string.closed and self.log_capture_string.getvalue()): if (
for i in range(len(self.log_capture_string.getvalue().split('\n')) - 1): not self.log_capture_string.closed
self.send_log('INFO.' + self.simulation_name, self.log_capture_string.getvalue().split('\n')[i]) and self.log_capture_string.getvalue()
):
for i in range(len(self.log_capture_string.getvalue().split("\n")) - 1):
self.send_log(
"INFO." + self.simulation_name,
self.log_capture_string.getvalue().split("\n")[i],
)
self.log_capture_string.truncate(0) self.log_capture_string.truncate(0)
self.log_capture_string.seek(0) self.log_capture_string.seek(0)
finally: finally:
if self.capture_logging: if self.capture_logging:
tornado.ioloop.IOLoop.current().call_later(LOGGING_INTERVAL, self.update_logging) tornado.ioloop.IOLoop.current().call_later(
LOGGING_INTERVAL, self.update_logging
)
def on_close(self): def on_close(self):
if self.application.verbose: if self.application.verbose:
logger.info('Socket closed!') logger.info("Socket closed!")
def send_log(self, logger, logging): def send_log(self, logger, logging):
self.write_message({'type': 'log', self.write_message({"type": "log", "logger": logger, "logging": logging})
'logger': logger,
'logging': logging})
@property @property
def simulation_name(self): def simulation_name(self):
return self.config.get('name', 'NoSimulationRunning') return self.config.get("name", "NoSimulationRunning")
@run_on_executor @run_on_executor
def nonblocking(self, config): def nonblocking(self, config):
@@ -174,28 +218,31 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
@tornado.gen.coroutine @tornado.gen.coroutine
def run_simulation(self): def run_simulation(self):
# Run simulation and capture logs # Run simulation and capture logs
logger.info('Running simulation!') logger.info("Running simulation!")
if 'visualization_params' in self.config: if "visualization_params" in self.config:
del self.config['visualization_params'] del self.config["visualization_params"]
with self.logging(self.simulation_name): with self.logging(self.simulation_name):
try: try:
config = dict(**self.config) config = dict(**self.config)
config['outdir'] = os.path.join(self.application.outdir, config['name']) config["outdir"] = os.path.join(self.application.outdir, config["name"])
config['dump'] = self.application.dump config["dump"] = self.application.dump
self.trials = yield self.nonblocking(config) self.trials = yield self.nonblocking(config)
self.write_message({'type': 'trials', self.write_message(
'data': list(trial.name for trial in self.trials) }) {
"type": "trials",
"data": list(trial.name for trial in self.trials),
}
)
except Exception as ex: except Exception as ex:
error = 'Something went wrong:\n\t{}'.format(ex) error = "Something went wrong:\n\t{}".format(ex)
logging.info(error) logging.info(error)
self.write_message({'type': 'error', self.write_message({"type": "error", "error": error})
'error': error}) self.send_log("ERROR." + self.simulation_name, error)
self.send_log('ERROR.' + self.simulation_name, error)
def get_trial(self, trial): def get_trial(self, trial):
logger.info('Available trials: %s ' % len(self.trials)) logger.info("Available trials: %s " % len(self.trials))
logger.info('Ask for : %s' % trial) logger.info("Ask for : %s" % trial)
trial = self.trials[trial] trial = self.trials[trial]
G = trial.history_to_graph() G = trial.history_to_graph()
return nx.node_link_data(G) return nx.node_link_data(G)
@@ -215,25 +262,28 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
self.logger_application.removeHandler(ch) self.logger_application.removeHandler(ch)
self.capture_logging = False self.capture_logging = False
return self.capture_logging return self.capture_logging
class ModularServer(tornado.web.Application): class ModularServer(tornado.web.Application):
""" Main visualization application. """ """Main visualization application."""
port = 8001 port = 8001
page_handler = (r'/', PageHandler) page_handler = (r"/", PageHandler)
socket_handler = (r'/ws', SocketHandler) socket_handler = (r"/ws", SocketHandler)
static_handler = (r'/(.*)', tornado.web.StaticFileHandler, static_handler = (
{'path': os.path.join(ROOT, 'static')}) r"/(.*)",
local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler, tornado.web.StaticFileHandler,
{'path': ''}) {"path": os.path.join(ROOT, "static")},
)
local_handler = (r"/local/(.*)", tornado.web.StaticFileHandler, {"path": ""})
handlers = [page_handler, socket_handler, static_handler, local_handler] handlers = [page_handler, socket_handler, static_handler, local_handler]
settings = {'debug': True, settings = {"debug": True, "template_path": ROOT + "/templates"}
'template_path': ROOT + '/templates'}
def __init__(
self, dump=False, outdir="output", name="SOIL", verbose=True, *args, **kwargs
):
def __init__(self, dump=False, outdir='output', name='SOIL', verbose=True, *args, **kwargs):
self.verbose = verbose self.verbose = verbose
self.name = name self.name = name
self.dump = dump self.dump = dump
@@ -243,12 +293,12 @@ class ModularServer(tornado.web.Application):
super().__init__(self.handlers, **self.settings) super().__init__(self.handlers, **self.settings)
def launch(self, port=None): def launch(self, port=None):
""" Run the app. """ """Run the app."""
if port is not None: if port is not None:
self.port = port self.port = port
url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port) url = "http://127.0.0.1:{PORT}".format(PORT=self.port)
print('Interface starting at {url}'.format(url=url)) print("Interface starting at {url}".format(url=url))
self.listen(self.port) self.listen(self.port)
# webbrowser.open(url) # webbrowser.open(url)
tornado.ioloop.IOLoop.instance().start() tornado.ioloop.IOLoop.instance().start()
@@ -263,12 +313,22 @@ def run(*args, **kwargs):
def main(): def main():
import argparse import argparse
parser = argparse.ArgumentParser(description='Visualization of a Graph Model') parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation') parser.add_argument(
parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true') "--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server') )
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true') parser.add_argument(
"--dump", "-d", help="dumping results in folder output", action="store_true"
)
parser.add_argument(
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
)
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
args = parser.parse_args() args = parser.parse_args()
run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose) run(
name=args.name,
port=(args.port[0] if isinstance(args.port, list) else args.port),
verbose=args.verbose,
)
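
For completeness, the visualization server above can also be started from Python rather than through the CLI wrapper. This is a hedged sketch: the soil.web module path is an assumption, while the keyword arguments mirror the argparse options defined in main().

# Hypothetical programmatic launch of the server defined above.
from soil.web import run  # module path assumed

run(name="SOIL", port=8001, verbose=True)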


@@ -2,4 +2,4 @@ from . import main

if __name__ == "__main__":
    main()


@@ -6,11 +6,11 @@ network_params:
  n: 100
  m: 2
network_agents:
  - agent_class: ControlModelM2
    weight: 0.1
    state:
      id: 1
  - agent_class: ControlModelM2
    weight: 0.9
    state:
      id: 0


@@ -4,20 +4,33 @@ from simulator import Simulator

def run(simulator, name="SOIL", port=8001, verbose=False):
    server = ModularServer(
        simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose
    )
    server.port = port
    server.launch()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
    parser.add_argument(
        "--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
    )
    parser.add_argument(
        "--dump", "-d", help="dumping results in folder output", action="store_true"
    )
    parser.add_argument(
        "--port", "-p", nargs=1, default=8001, help="port for launching the server"
    )
    parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
    args = parser.parse_args()

    soil = Simulator(dump=args.dump)
    run(
        soil,
        name=args.name,
        port=(args.port[0] if isinstance(args.port, list) else args.port),
        verbose=args.verbose,
    )


@@ -1,4 +1,4 @@
pytest
pytest-profiling
scipy>=1.3
tornado


@@ -0,0 +1,49 @@
---
version: '2'
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
model_class: Environment
model_params:
topology:
params:
generator: complete_graph
n: 4
agents:
agent_class: CounterModel
state:
group: network
times: 1
topology: true
distribution:
- agent_class: CounterModel
weight: 0.25
state:
state_id: 0
times: 1
- agent_class: AggregatedCounter
weight: 0.5
state:
times: 2
override:
- filter:
node_id: 1
state:
name: 'Node 1'
- filter:
node_id: 2
state:
name: 'Node 2'
fixed:
- agent_class: BaseAgent
hidden: true
topology: false
state:
name: 'Environment Agent 1'
times: 10
group: environment
am_i_complete: true
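
The file above exercises the new version-2 configuration schema. A minimal sketch of how such a file would be loaded and run, using the entry points that appear in the tests later in this diff; the file name is illustrative:

from soil import serialization, simulation

cfg = serialization.load_file("tests/complete_converted.yml")[0]  # hypothetical path
s = simulation.from_config(cfg)
envs = s.run_simulation(dry_run=True)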

tests/old_complete.yml (new file, 37 lines)

@@ -0,0 +1,37 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 4
network_agents:
- agent_class: CounterModel
weight: 0.25
state:
state_id: 0
times: 1
- agent_class: AggregatedCounter
weight: 0.5
state:
times: 2
environment_agents:
- agent_id: 'Environment Agent 1'
agent_class: BaseAgent
state:
times: 10
environment_class: Environment
environment_params:
am_i_complete: true
agent_class: CounterModel
default_state:
times: 1
states:
1:
name: 'Node 1'
2:
name: 'Node 2'

tests/test_agents.py (new file, 163 lines)

@@ -0,0 +1,163 @@
from unittest import TestCase
import pytest
from soil import agents, environment
from soil import time as stime
class Dead(agents.FSM):
@agents.default_state
@agents.state
def only(self):
return self.die()
class TestAgents(TestCase):
def test_die_returns_infinity(self):
"""The last step of a dead agent should return time.INFINITY"""
d = Dead(unique_id=0, model=environment.Environment())
ret = d.step()
assert ret == stime.NEVER
def test_die_raises_exception(self):
"""A dead agent should raise an exception if it is stepped after death"""
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(stime.DeadAgent):
d.step()
def test_agent_generator(self):
"""
The step function of an agent could be a generator. In that case, the state of the
agent will be resumed after every call to step.
"""
a = 0
class Gen(agents.BaseAgent):
def step(self):
nonlocal a
for i in range(5):
yield
a += 1
e = environment.Environment()
g = Gen(model=e, unique_id=e.next_id())
e.schedule.add(g)
for i in range(5):
e.step()
assert a == i
def test_state_decorator(self):
class MyAgent(agents.FSM):
run = 0
@agents.default_state
@agents.state("original")
def root(self):
self.run += 1
return self.other
@agents.state
def other(self):
self.run += 1
e = environment.Environment()
a = MyAgent(model=e, unique_id=e.next_id())
a.step()
assert a.run == 1
a.step()
def test_broadcast(self):
"""
An agent should be able to broadcast messages to every other agent, AND each receiver should be able
to process it
"""
class BCast(agents.Evented):
pings_received = 0
def step(self):
print(self.model.broadcast)
try:
self.model.broadcast("PING")
except Exception as ex:
print(ex)
while True:
self.check_messages()
yield
def on_receive(self, msg, sender=None):
self.pings_received += 1
e = environment.EventedEnvironment()
for i in range(10):
e.add_agent(agent_class=BCast)
e.step()
pings_received = lambda: [a.pings_received for a in e.agents]
assert sorted(pings_received()) == list(range(1, 11))
e.step()
assert all(x == 10 for x in pings_received())
def test_ask_messages(self):
"""
An agent should be able to ask another agent, and wait for a response.
"""
# There are two agents, they try to send pings
# This is arguably a very contrived example. In practice, the or
# There should be a delay of one step between agent 0 and 1
# On the first step:
# Agent 0 sends a PING, but blocks before a PONG
# Agent 1 detects the PING, responds with a PONG, and blocks after its own PING
# After that step, every agent can both receive (there are pending messages) and send.
# In each step, for each agent, one message is sent, and another one is received
# (although not necessarily in that order).
# Results depend on ordering (agents are normally shuffled)
# so we force the timedactivation not to be shuffled
pings = []
pongs = []
responses = []
class Ping(agents.EventedAgent):
def step(self):
target_id = (self.unique_id + 1) % self.count_agents()
target = self.model.agents[target_id]
print("starting")
while True:
if pongs or not pings: # First agent, or anyone after that
pings.append(self.now)
response = yield target.ask("PING")
responses.append(response)
else:
print("NOT sending ping")
print("Checking msgs")
# Do not block if we have already received a PING
if not self.check_messages():
yield self.received()
print("done")
def on_receive(self, msg, sender=None):
if msg == "PING":
pongs.append(self.now)
return "PONG"
raise Exception("This should never happen")
e = environment.EventedEnvironment(schedule_class=stime.OrderedTimedActivation)
for i in range(2):
e.add_agent(agent_class=Ping)
assert e.now == 0
for i in range(5):
e.step()
time = i + 1
assert e.now == time
assert len(pings) == 2 * time
assert len(pongs) == (2 * time) - 1
# Every step between 0 and t appears twice
assert sum(pings) == sum(range(time)) * 2
# It is the same as pings, without the leading 0
assert sum(pongs) == sum(range(time)) * 2
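
The two messaging tests above rely on the evented API introduced in this changeset (broadcast, check_messages, on_receive and generator steps). A condensed sketch of the same pattern outside a test harness; the Shouter class and its heard counter are illustrative names:

from soil import agents, environment


class Shouter(agents.Evented):
    heard = 0

    def step(self):
        self.model.broadcast("hello")  # send a message to every other agent
        while True:
            self.check_messages()  # deliver pending messages via on_receive
            yield  # hand control back until the next scheduled step

    def on_receive(self, msg, sender=None):
        self.heard += 1


env = environment.EventedEnvironment()
for _ in range(3):
    env.add_agent(agent_class=Shouter)
env.step()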


@@ -1,90 +0,0 @@
from unittest import TestCase
import os
import pandas as pd
import yaml
from functools import partial
from os.path import join
from soil import simulation, analysis, agents
ROOT = os.path.abspath(os.path.dirname(__file__))
class Ping(agents.FSM):
defaults = {
'count': 0,
}
@agents.default_state
@agents.state
def even(self):
self.debug(f'Even {self["count"]}')
self['count'] += 1
return self.odd
@agents.state
def odd(self):
self.debug(f'Odd {self["count"]}')
self['count'] += 1
return self.even
class TestAnalysis(TestCase):
# Code to generate a simple sqlite history
def setUp(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'analysis',
'seed': 'seed',
'network_params': {
'generator': 'complete_graph',
'n': 2
},
'agent_type': Ping,
'states': [{'interval': 1}, {'interval': 2}],
'max_time': 30,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_config(config)
self.env = s.run_simulation(dry_run=True)[0]
def test_saved(self):
env = self.env
assert env.get_agent(0)['count', 0] == 1
assert env.get_agent(0)['count', 29] == 30
assert env.get_agent(1)['count', 0] == 1
assert env.get_agent(1)['count', 29] == 15
assert env['env', 29, None]['SEED'] == env['env', 29, 'SEED']
def test_count(self):
env = self.env
df = analysis.read_sql(env._history.db_path)
res = analysis.get_count(df, 'SEED', 'state_id')
assert res['SEED'][self.env['SEED']].iloc[0] == 1
assert res['SEED'][self.env['SEED']].iloc[-1] == 1
assert res['state_id']['odd'].iloc[0] == 2
assert res['state_id']['even'].iloc[0] == 0
assert res['state_id']['odd'].iloc[-1] == 1
assert res['state_id']['even'].iloc[-1] == 1
def test_value(self):
env = self.env
df = analysis.read_sql(env._history.db_path)
res_sum = analysis.get_value(df, 'count')
assert res_sum['count'].iloc[0] == 2
import numpy as np
res_mean = analysis.get_value(df, 'count', aggfunc=np.mean)
assert res_mean['count'].iloc[15] == (16+8)/2
res_total = analysis.get_majority(df)
res_total['SEED'].iloc[0] == self.env['SEED']

tests/test_config.py (new file, 147 lines)

@@ -0,0 +1,147 @@
from unittest import TestCase
import os
import yaml
import copy
from os.path import join
from soil import simulation, serialization, config, network, agents, utils
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")
FORCE_TESTS = os.environ.get("FORCE_TESTS", "")
def isequal(a, b):
if isinstance(a, dict):
for (k, v) in a.items():
if v:
isequal(a[k], b[k])
else:
assert not b.get(k, None)
return
assert a == b
class TestConfig(TestCase):
def test_conversion(self):
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
converted_defaults = config.convert_old(old, strict=False)
converted = converted_defaults.dict(exclude_unset=True)
isequal(converted, expected)
def test_configuration_changes(self):
"""
The configuration should not change after running
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
s = simulation.from_config(config)
init_config = copy.copy(s.to_dict())
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
# del nconfig['to
isequal(init_config, nconfig)
def test_topology_config(self):
netconfig = config.NetConfig(**{"path": join(ROOT, "test.gexf")})
net = network.from_config(netconfig, dir_path=ROOT)
assert len(net.nodes) == 2
assert len(net.edges) == 1
def test_env_from_config(self):
"""
Simple configuration that tests that the graph is loaded, and that
network agents are initialized properly.
"""
cfg = {
"name": "CounterAgent",
"network_params": {"path": join(ROOT, "test.gexf")},
"agent_class": "CounterModel",
# 'states': [{'times': 10}, {'times': 20}],
"max_time": 2,
"dry_run": True,
"num_trials": 1,
"environment_params": {},
}
conf = config.convert_old(cfg)
s = simulation.from_config(conf)
env = s.get_env()
assert len(env.G.nodes) == 2
assert len(env.G.edges) == 1
assert len(env.agents) == 2
assert env.agents[0].G == env.G
def test_agents_from_config(self):
"""We test that the known complete configuration produces
the right agents in the right groups"""
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
s = simulation.from_config(cfg)
env = s.get_env()
assert len(env.G.nodes) == 4
assert len(env.agents(group="network")) == 4
assert len(env.agents(group="environment")) == 1
def test_yaml(self):
"""
The YAML version of a newly created configuration should be equivalent
to the configuration file used.
Values not present in the original config file should have reasonable
defaults.
"""
with utils.timer("loading"):
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
s = simulation.from_config(config)
with utils.timer("serializing"):
serial = s.to_yaml()
with utils.timer("recovering"):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
for (k, v) in config.items():
assert recovered[k] == v
def make_example_test(path, cfg):
def wrapped(self):
root = os.getcwd()
print(path)
s = simulation.from_config(cfg)
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
return wrapped
def add_example_tests():
for config, path in serialization.load_files(
join(EXAMPLES, "*", "*.yml"),
join(EXAMPLES, "*.yml"),
):
p = make_example_test(path=path, cfg=config)
fname = os.path.basename(path)
p.__name__ = "test_example_file_%s" % fname
p.__doc__ = "%s should be a valid configuration" % fname
setattr(TestConfig, p.__name__, p)
del p
add_example_tests()
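
test_conversion above checks config.convert_old, which translates the old flat schema into the version-2 layout. A minimal sketch of that conversion on its own, using a trimmed-down dictionary similar to the one in test_env_from_config:

from soil import config

old_cfg = {
    "name": "CounterAgent",
    "network_params": {"path": "tests/test.gexf"},
    "agent_class": "CounterModel",
    "max_time": 2,
    "num_trials": 1,
    "environment_params": {},
}
new_cfg = config.convert_old(old_cfg)
print(new_cfg.dict(exclude_unset=True))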


@@ -2,51 +2,54 @@ from unittest import TestCase
import os
from os.path import join

from soil import serialization, simulation, config

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")

FORCE_TESTS = os.environ.get("FORCE_TESTS", "")


class TestExamples(TestCase):
    pass


def make_example_test(path, cfg):
    def wrapped(self):
        root = os.getcwd()
        for s in simulation.iter_from_config(cfg):
            iterations = s.max_steps * s.num_trials
            if iterations < 0 or iterations > 1000:
                s.max_steps = 100
                s.num_trials = 1
            assert isinstance(cfg, config.Config)
            if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
                self.skipTest("Example ignored.")
            envs = s.run_simulation(dry_run=True)
            assert envs
            for env in envs:
                assert env
                try:
                    n = cfg.model_params["network_params"]["n"]
                    assert len(list(env.network_agents)) == n
                except KeyError:
                    pass
                assert env.schedule.steps > 0  # It has run
                assert env.schedule.steps <= s.max_steps  # But not further than allowed

    return wrapped


def add_example_tests():
    for cfg, path in serialization.load_files(
        join(EXAMPLES, "**", "*.yml"),
    ):
        if "soil_output" in path:
            continue
        p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
        fname = os.path.basename(path)
        p.__name__ = "test_example_file_%s" % fname
        p.__doc__ = "%s should be a valid configuration" % fname
        setattr(TestExamples, p.__name__, p)
        del p


@@ -2,13 +2,13 @@ import os
import io import io
import tempfile import tempfile
import shutil import shutil
from time import time import sqlite3
from unittest import TestCase from unittest import TestCase
from soil import exporters from soil import exporters
from soil import simulation from soil import simulation
from soil import agents
from soil.stats import distribution
class Dummy(exporters.Exporter): class Dummy(exporters.Exporter):
started = False started = False
@@ -19,82 +19,93 @@ class Dummy(exporters.Exporter):
called_trial = 0 called_trial = 0
called_end = 0 called_end = 0
def start(self): def sim_start(self):
self.__class__.called_start += 1 self.__class__.called_start += 1
self.__class__.started = True self.__class__.started = True
def trial(self, env, stats): def trial_end(self, env):
assert env assert env
self.__class__.trials += 1 self.__class__.trials += 1
self.__class__.total_time += env.now self.__class__.total_time += env.now
self.__class__.called_trial += 1 self.__class__.called_trial += 1
def end(self, stats): def sim_end(self):
self.__class__.ended = True self.__class__.ended = True
self.__class__.called_end += 1 self.__class__.called_end += 1
class Exporters(TestCase): class Exporters(TestCase):
def test_basic(self): def test_basic(self):
# We need to add at least one agent to make sure the scheduler
# ticks every step
num_trials = 5
max_time = 2
config = { config = {
'name': 'exporter_sim', "name": "exporter_sim",
'network_params': {}, "model_params": {"agents": [{"agent_class": agents.BaseAgent}]},
'agent_type': 'CounterModel', "max_time": max_time,
'max_time': 2, "num_trials": num_trials,
'num_trials': 5,
'environment_params': {}
} }
s = simulation.from_config(config) s = simulation.from_config(config)
for env in s.run_simulation(exporters=[Dummy], dry_run=True): for env in s.run_simulation(exporters=[Dummy], dry_run=True):
assert env.now <= 2 assert len(env.agents) == 1
assert Dummy.started assert Dummy.started
assert Dummy.ended assert Dummy.ended
assert Dummy.called_start == 1 assert Dummy.called_start == 1
assert Dummy.called_end == 1 assert Dummy.called_end == 1
assert Dummy.called_trial == 5 assert Dummy.called_trial == num_trials
assert Dummy.trials == 5 assert Dummy.trials == num_trials
assert Dummy.total_time == 2*5 assert Dummy.total_time == max_time * num_trials
def test_writing(self): def test_writing(self):
'''Try to write CSV, GEXF, sqlite and YAML (without dry_run)''' """Try to write CSV, sqlite and YAML (without dry_run)"""
n_trials = 5 n_trials = 5
config = { config = {
'name': 'exporter_sim', "name": "exporter_sim",
'network_params': { "network_params": {"generator": "complete_graph", "n": 4},
'generator': 'complete_graph', "agent_class": "CounterModel",
'n': 4 "max_time": 2,
}, "num_trials": n_trials,
'agent_type': 'CounterModel', "dry_run": False,
'max_time': 2, "environment_params": {},
'num_trials': n_trials,
'environment_params': {}
} }
output = io.StringIO() output = io.StringIO()
s = simulation.from_config(config) s = simulation.from_config(config)
tmpdir = tempfile.mkdtemp() tmpdir = tempfile.mkdtemp()
envs = s.run_simulation(exporters=[ envs = s.run_simulation(
exporters.default, exporters=[
exporters.csv, exporters.default,
exporters.gexf, exporters.csv,
], ],
stats=[distribution,], model_params={
outdir=tmpdir, "agent_reporters": {"times": "times"},
exporter_params={'copy_to': output}) "model_reporters": {
"constant": lambda x: 1,
},
},
dry_run=False,
outdir=tmpdir,
exporter_params={"copy_to": output},
)
result = output.getvalue() result = output.getvalue()
simdir = os.path.join(tmpdir, s.group or '', s.name) simdir = os.path.join(tmpdir, s.group or "", s.name)
with open(os.path.join(simdir, '{}.dumped.yml'.format(s.name))) as f: with open(os.path.join(simdir, "{}.dumped.yml".format(s.name))) as f:
result = f.read() result = f.read()
assert result assert result
try: try:
for e in envs: for e in envs:
with open(os.path.join(simdir, '{}.gexf'.format(e.name))) as f: db = sqlite3.connect(os.path.join(simdir, f"{s.name}.sqlite"))
result = f.read() cur = db.cursor()
assert result agent_entries = cur.execute("SELECT * from agents").fetchall()
env_entries = cur.execute("SELECT * from env").fetchall()
assert len(agent_entries) > 0
assert len(env_entries) > 0
with open(os.path.join(simdir, '{}.csv'.format(e.name))) as f: with open(os.path.join(simdir, "{}.env.csv".format(e.id))) as f:
result = f.read() result = f.read()
assert result assert result
finally: finally:


@@ -1,256 +1,152 @@
from unittest import TestCase from unittest import TestCase
import os import os
import io
import yaml
import pickle import pickle
import networkx as nx import networkx as nx
from functools import partial from functools import partial
from os.path import join from os.path import join
from soil import (simulation, Environment, agents, serialization, from soil import simulation, Environment, agents, network, serialization, utils, config
utils)
from soil.time import Delta from soil.time import Delta
ROOT = os.path.abspath(os.path.dirname(__file__)) ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples') EXAMPLES = join(ROOT, "..", "examples")
class CustomAgent(agents.FSM): class CustomAgent(agents.FSM, agents.NetworkAgent):
@agents.default_state @agents.default_state
@agents.state @agents.state
def normal(self): def normal(self):
self.neighbors = self.count_agents(state_id='normal', self.neighbors = self.count_agents(state_id="normal", limit_neighbors=True)
limit_neighbors=True)
@agents.state @agents.state
def unreachable(self): def unreachable(self):
return return
class TestMain(TestCase): class TestMain(TestCase):
def test_load_graph(self):
"""
Load a graph from file if the extension is known.
Raise an exception otherwise.
"""
config = {
'network_params': {
'path': join(ROOT, 'test.gexf')
}
}
G = serialization.load_network(config['network_params'])
assert G
assert len(G) == 2
with self.assertRaises(AttributeError):
config = {
'network_params': {
'path': join(ROOT, 'unknown.extension')
}
}
G = serialization.load_network(config['network_params'])
print(G)
def test_generate_barabasi(self):
"""
If no path is given, a generator and network parameters
should be used to generate a network
"""
config = {
'network_params': {
'generator': 'barabasi_albert_graph'
}
}
with self.assertRaises(TypeError):
G = serialization.load_network(config['network_params'])
config['network_params']['n'] = 100
config['network_params']['m'] = 10
G = serialization.load_network(config['network_params'])
assert len(G) == 100
def test_empty_simulation(self): def test_empty_simulation(self):
"""A simulation with a base behaviour should do nothing""" """A simulation with a base behaviour should do nothing"""
config = { config = {
'network_params': { "model_params": {
'path': join(ROOT, 'test.gexf') "network_params": {"path": join(ROOT, "test.gexf")},
}, "agent_class": "BaseAgent",
'agent_type': 'BaseAgent',
'environment_params': {
} }
} }
s = simulation.from_config(config) s = simulation.from_config(config)
s.run_simulation(dry_run=True) s.run_simulation(dry_run=True)
def test_network_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
"name": "CounterAgent",
"num_trials": 1,
"max_time": 2,
"model_params": {
"network_params": {
"generator": nx.complete_graph,
"n": 2,
},
"agent_class": "CounterModel",
"states": {
0: {"times": 10},
1: {"times": 20},
},
},
}
s = simulation.from_config(config)
def test_counter_agent(self): def test_counter_agent(self):
""" """
The initial states should be applied to the agent and the The initial states should be applied to the agent and the
agent should be able to update its state.""" agent should be able to update its state."""
config = { config = {
'name': 'CounterAgent', "version": "2",
'network_params': { "name": "CounterAgent",
'path': join(ROOT, 'test.gexf') "dry_run": True,
"num_trials": 1,
"max_time": 2,
"model_params": {
"topology": {"path": join(ROOT, "test.gexf")},
"agents": {
"agent_class": "CounterModel",
"topology": True,
"fixed": [{"state": {"times": 10}}, {"state": {"times": 20}}],
},
}, },
'agent_type': 'CounterModel',
'states': [{'times': 10}, {'times': 20}],
'max_time': 2,
'num_trials': 1,
'environment_params': {
}
} }
s = simulation.from_config(config) s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0] env = s.get_env()
assert env.get_agent(0)['times', 0] == 11 assert isinstance(env.agents[0], agents.CounterModel)
assert env.get_agent(0)['times', 1] == 12 assert env.agents[0].G == env.G
assert env.get_agent(1)['times', 0] == 21 assert env.agents[0]["times"] == 10
assert env.get_agent(1)['times', 1] == 22 assert env.agents[0]["times"] == 10
env.step()
assert env.agents[0]["times"] == 11
assert env.agents[1]["times"] == 21
def test_counter_agent_history(self): def test_init_and_count_agents(self):
""" """Agents should be properly initialized and counting should filter them properly"""
The evolution of the state should be recorded in the logging agent # TODO: separate this test into two or more test cases
"""
config = { config = {
'name': 'CounterAgent', "max_time": 10,
'network_params': { "model_params": {
'path': join(ROOT, 'test.gexf') "agents": [
{"agent_class": CustomAgent, "weight": 1, "topology": True},
{"agent_class": CustomAgent, "weight": 3, "topology": True},
],
"topology": {"path": join(ROOT, "test.gexf")},
}, },
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
}
} }
s = simulation.from_config(config) s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0] env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents: assert env.agents[0].weight == 1
last = 0 assert env.count_agents() == 2
assert len(agent[None, None]) == 11 assert env.count_agents(weight=1) == 1
for step, total in sorted(agent['total', None]): assert env.count_agents(weight=3) == 1
assert total == last + 2 assert env.count_agents(agent_class=CustomAgent) == 2
last = total
def test_custom_agent(self):
"""Allow for search of neighbors with a certain state_id"""
config = {
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': CustomAgent,
'weight': 1
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(1).count_agents(state_id='normal') == 2
assert env.get_agent(1).count_agents(state_id='normal', limit_neighbors=True) == 1
assert env.get_agent(0).neighbors == 1
def test_torvalds_example(self): def test_torvalds_example(self):
"""A complete example from a documentation should work.""" """A complete example from a documentation should work."""
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0] config = serialization.load_file(join(EXAMPLES, "torvalds.yml"))[0]
config['network_params']['path'] = join(EXAMPLES, config["model_params"]["network_params"]["path"] = join(
config['network_params']['path']) EXAMPLES, config["model_params"]["network_params"]["path"]
)
s = simulation.from_config(config) s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0] env = s.run_simulation(dry_run=True)[0]
for a in env.network_agents: for a in env.network_agents:
skill_level = a.state['skill_level'] skill_level = a.state["skill_level"]
if a.id == 'Torvalds': if a.id == "Torvalds":
assert skill_level == 'God' assert skill_level == "God"
assert a.state['total'] == 3 assert a.state["total"] == 3
assert a.state['neighbors'] == 2 assert a.state["neighbors"] == 2
elif a.id == 'balkian': elif a.id == "balkian":
assert skill_level == 'developer' assert skill_level == "developer"
assert a.state['total'] == 3 assert a.state["total"] == 3
assert a.state['neighbors'] == 1 assert a.state["neighbors"] == 1
else: else:
assert skill_level == 'beginner' assert skill_level == "beginner"
assert a.state['total'] == 3 assert a.state["total"] == 3
assert a.state['neighbors'] == 1 assert a.state["neighbors"] == 1
def test_yaml(self):
"""
The YAML version of a newly created simulation
should be equivalent to the configuration file used
"""
with utils.timer('loading'):
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
with utils.timer('serializing'):
serial = s.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
with utils.timer('deleting'):
del recovered['topology']
assert config == recovered
def test_configuration_changes(self):
"""
The configuration should not change after running
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
del nconfig['topology']
assert config == nconfig
def test_row_conversion(self):
env = Environment()
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
def test_save_geometric(self):
"""
There is a bug in networkx that prevents it from creating a GEXF file
from geometric models. We should work around it.
"""
G = nx.random_geometric_graph(20, 0.1)
env = Environment(topology=G)
f = io.BytesIO()
env.dump_gexf(f)
def test_save_graph(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
def test_serialize_class(self): def test_serialize_class(self):
ser, name = serialization.serialize(agents.BaseAgent) ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
assert name == 'soil.agents.BaseAgent' assert name == "soil.agents.BaseAgent"
assert ser == agents.BaseAgent
ser, name = serialization.serialize(
agents.BaseAgent,
known_modules=[
"soil",
],
)
assert name == "BaseAgent"
assert ser == agents.BaseAgent assert ser == agents.BaseAgent
ser, name = serialization.serialize(CustomAgent) ser, name = serialization.serialize(CustomAgent)
assert name == 'test_main.CustomAgent' assert name == "test_main.CustomAgent"
assert ser == CustomAgent assert ser == CustomAgent
pickle.dumps(ser) pickle.dumps(ser)
@@ -262,99 +158,43 @@ class TestMain(TestCase):
des = serialization.deserialize(name, ser) des = serialization.deserialize(name, ser)
assert i == des assert i == des
def test_serialize_agent_type(self): def test_serialize_agent_class(self):
'''A class from soil.agents should be serialized without the module part''' """A class from soil.agents should be serialized without the module part"""
ser = agents.serialize_type(CustomAgent) ser = agents._serialize_type(CustomAgent)
assert ser == 'test_main.CustomAgent' assert ser == "test_main.CustomAgent"
ser = agents.serialize_type(agents.BaseAgent) ser = agents._serialize_type(agents.BaseAgent)
assert ser == 'BaseAgent' assert ser == "BaseAgent"
pickle.dumps(ser) pickle.dumps(ser)
def test_deserialize_agent_distribution(self):
agent_distro = [
{
'agent_type': 'CounterModel',
'weight': 1
},
{
'agent_type': 'test_main.CustomAgent',
'weight': 2
},
]
converted = agents.deserialize_definition(agent_distro)
assert converted[0]['agent_type'] == agents.CounterModel
assert converted[1]['agent_type'] == CustomAgent
pickle.dumps(converted)
def test_serialize_agent_distribution(self):
agent_distro = [
{
'agent_type': agents.CounterModel,
'weight': 1
},
{
'agent_type': CustomAgent,
'weight': 2
},
]
converted = agents.serialize_definition(agent_distro)
assert converted[0]['agent_type'] == 'CounterModel'
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_pickle_agent_environment(self):
env = Environment(name='Test')
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'
def test_subgraph(self):
'''An agent should be able to subgraph the global topology'''
G = nx.Graph()
G.add_node(3)
G.add_edge(1, 2)
distro = agents.calculate_distribution(agent_type=agents.NetworkAgent)
env = Environment(name='Test', topology=G, network_agents=distro)
lst = list(env.network_agents)
a2 = env.get_agent(2)
a3 = env.get_agent(3)
assert len(a2.subgraph(limit_neighbors=True)) == 2
assert len(a3.subgraph(limit_neighbors=True)) == 1
assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
assert len(a3.subgraph(agent_type=agents.NetworkAgent)) == 3
def test_templates(self):
- '''Loading a template should result in several configs'''
- configs = serialization.load_file(join(EXAMPLES, 'template.yml'))
+ """Loading a template should result in several configs"""
+ configs = serialization.load_file(join(EXAMPLES, "template.yml"))
assert len(configs) > 0
def test_until(self):
- config = {
-     'name': 'until_sim',
-     'network_params': {},
-     'agent_type': 'CounterModel',
-     'max_time': 2,
-     'num_trials': 50,
-     'environment_params': {}
- }
- s = simulation.from_config(config)
+ n_runs = 0
+ class CheckRun(agents.BaseAgent):
+     def step(self):
+         nonlocal n_runs
+         n_runs += 1
+         return super().step()
+ n_trials = 50
+ max_time = 2
+ s = simulation.Simulation(
+     model_params={"agents": [{"agent_class": CheckRun}]},
+     num_trials=n_trials,
+     max_time=max_time,
+ )
runs = list(s.run_simulation(dry_run=True))
- over = list(x.now for x in runs if x.now>2)
- assert len(runs) == config['num_trials']
+ over = list(x.now for x in runs if x.now > 2)
+ assert len(runs) == n_trials
assert len(over) == 0
def test_fsm(self):
- '''Basic state change'''
+ """Basic state change"""
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state
@@ -373,7 +213,8 @@ class TestMain(TestCase):
assert a.state_id == a.ping.id
def test_fsm_when(self):
- '''Basic state change'''
+ """Basic state change"""
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state


@@ -1,4 +1,4 @@
- '''
+ """
Mesa-SOIL integration tests
We have to test that:
@@ -8,13 +8,15 @@ We have to test that:
- Mesa visualizations work with SOIL simulations
- '''
+ """
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid
class MoneyAgent(Agent):
- """ An agent with fixed initial wealth."""
+ """An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
@@ -33,15 +35,15 @@ class MoneyAgent(Agent):
def move(self):
possible_steps = self.model.grid.get_neighborhood(
- self.pos,
- moore=True,
- include_center=False)
+ self.pos, moore=True, include_center=False
+ )
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
@@ -58,7 +60,7 @@ class MoneyModel(Model):
self.grid.place_agent(a, (x, y))
def step(self):
- '''Advance the model by one step.'''
+ """Advance the model by one step."""
self.schedule.step()

tests/test_network.py Normal file

@@ -0,0 +1,110 @@
from unittest import TestCase
import io
import os
import networkx as nx
from os.path import join
from soil import config, network, environment, agents, simulation
from test_main import CustomAgent
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")
class TestNetwork(TestCase):
def test_load_graph(self):
"""
Load a graph from file if the extension is known.
Raise an exception otherwise.
"""
config = {"network_params": {"path": join(ROOT, "test.gexf")}}
G = network.from_config(config["network_params"])
assert G
assert len(G) == 2
with self.assertRaises(AttributeError):
config = {"network_params": {"path": join(ROOT, "unknown.extension")}}
G = network.from_config(config["network_params"])
print(G)
def test_generate_barabasi(self):
"""
If no path is given, a generator and network parameters
should be used to generate a network
"""
cfg = {"params": {"generator": "barabasi_albert_graph"}}
with self.assertRaises(Exception):
G = network.from_config(cfg)
cfg["params"]["n"] = 100
cfg["params"]["m"] = 10
G = network.from_config(cfg)
assert len(G) == 100
def test_save_geometric(self):
"""
There is a bug in networkx that prevents it from creating a GEXF file
from geometric models. We should work around it.
"""
G = nx.random_geometric_graph(20, 0.1)
env = environment.NetworkEnvironment(topology=G)
f = io.BytesIO()
assert env.G
network.dump_gexf(env.G, f)
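# The networkx issue referenced in the docstring above reportedly stems from the
# "pos" node attribute (a list of coordinates) that nx.random_geometric_graph
# attaches to every node, which the GEXF writer cannot serialize directly.
# The helper below is a minimal sketch of one possible workaround (a
# hypothetical helper, not necessarily how soil's dump_gexf is implemented,
# and it assumes 2-D positions): it flattens "pos" into plain float
# attributes before delegating to nx.write_gexf.
def _dump_gexf_flattening_pos(G, f):
    # Work on a copy so the caller's graph keeps its original attributes
    H = G.copy()
    for _, data in H.nodes(data=True):
        pos = data.pop("pos", None)
        if pos is not None:
            # Scalar floats are safe in GEXF, unlike lists or tuples
            data["x"], data["y"] = float(pos[0]), float(pos[1])
    nx.write_gexf(H, f)
# Example use: _dump_gexf_flattening_pos(nx.random_geometric_graph(20, 0.1), io.BytesIO())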
def test_networkenvironment_creation(self):
"""Networkenvironment should accept netconfig as parameters"""
model_params = {
"topology": {"path": join(ROOT, "test.gexf")},
"agents": {
"topology": True,
"distribution": [
{
"agent_class": CustomAgent,
}
],
},
}
env = environment.Environment(**model_params)
assert env.G
env.step()
assert len(env.G) == 2
assert len(env.agents) == 2
assert env.agents[1].count_agents(state_id="normal") == 2
assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
assert env.agents[0].count_neighbors() == 1
def test_custom_agent_neighbors(self):
"""Allow for search of neighbors with a certain state_id"""
config = {
"model_params": {
"topology": {"path": join(ROOT, "test.gexf")},
"agents": {
"topology": True,
"distribution": [{"weight": 1, "agent_class": CustomAgent}],
},
},
"max_time": 10,
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.agents[1].count_agents(state_id="normal") == 2
assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
assert env.agents[0].count_neighbors() == 1
def test_subgraph(self):
"""An agent should be able to subgraph the global topology"""
G = nx.Graph()
G.add_node(3)
G.add_edge(1, 2)
distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
aconfig = config.AgentConfig(distribution=distro, topology=True)
env = environment.Environment(name="Test", topology=G, agents=aconfig)
lst = list(env.network_agents)
a2 = env.find_one(node_id=2)
a3 = env.find_one(node_id=3)
assert len(a2.subgraph(limit_neighbors=True)) == 2
assert len(a3.subgraph(limit_neighbors=True)) == 1
assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
assert len(a3.subgraph(agent_class=agents.NetworkAgent)) == 3


@@ -1,34 +0,0 @@
from unittest import TestCase
from soil import simulation, stats
from soil.utils import unflatten_dict
class Stats(TestCase):
def test_distribution(self):
'''The distribution exporter should write the number of agents in each state'''
config = {
'name': 'exporter_sim',
'network_params': {
'generator': 'complete_graph',
'n': 4
},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': 5,
'environment_params': {}
}
s = simulation.from_config(config)
for env in s.run_simulation(stats=[stats.distribution]):
pass
# stats_res = unflatten_dict(dict(env._history['stats', -1, None]))
allstats = s.get_stats()
for stat in allstats:
assert 'count' in stat
assert 'mean' in stat
if 'trial_id' in stat:
assert stat['mean']['neighbors'] == 3
assert stat['count']['total']['4'] == 4
else:
assert stat['count']['count']['neighbors']['3'] == 20
assert stat['mean']['min']['neighbors'] == stat['mean']['max']['neighbors']

tests/test_time.py Normal file

@@ -0,0 +1,74 @@
from unittest import TestCase
from soil import time, agents, environment
class TestMain(TestCase):
def test_cond(self):
"""
A Cond should match a When if its condition is True (and not match if it is False)
"""
t = time.Cond(lambda t: True)
f = time.Cond(lambda t: False)
for i in range(10):
w = time.When(i)
assert w == t
assert w != f
def test_cond_delta(self):
"""
Comparing a Cond to a Delta should always return False
"""
c = time.Cond(lambda t: False)
d = time.Delta(1)
assert c != d
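# For contrast with the Cond-based agent in the next test, here is a minimal
# sketch of an agent that waits a fixed amount of time by yielding a Delta
# from a generator-style step. The class name and the value 3 are purely
# illustrative, and the sketch assumes a yielded Delta is treated as a
# relative wait (wake up again `delta` time units from now):
class FixedWaitAgent(agents.BaseAgent):
    def step(self):
        while True:
            # Sleep for exactly 3 time units, regardless of any condition
            yield time.Delta(3)
# An environment.Environment(agents=[{"agent_class": FixedWaitAgent}]) should
# then step this agent at t = 0, 3, 6, ...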
def test_cond_env(self):
""" """
times_started = []
times_awakened = []
times_asleep = []
times = []
done = []
class CondAgent(agents.BaseAgent):
def step(self):
nonlocal done
times_started.append(self.now)
while True:
times_asleep.append(self.now)
yield time.Cond(lambda agent: agent.now >= 10, delta=2)
times_awakened.append(self.now)
if self.now >= 10:
break
done.append(self.now)
env = environment.Environment(agents=[{"agent_class": CondAgent}])
while env.schedule.time < 11:
times.append(env.now)
env.step()
assert env.schedule.time == 11
assert times_started == [0]
assert times_awakened == [10]
assert done == [10]
# The step at t=0 yields the Cond; with delta=2 it is re-checked at t=2, 4, 6
# and 8, and finally satisfied at t=10: six scheduler steps in total.
assert env.schedule.steps == 6
assert len(times) == 6
while env.schedule.time < 13:
times.append(env.now)
env.step()
assert times == [0, 2, 4, 6, 8, 10, 11]
assert env.schedule.time == 13
assert times_started == [0, 11]
assert times_awakened == [10]
assert done == [10]
# One more step at t=11, where the agent starts over and yields its Cond
# again; continuing past it would require another step at t >= 13.
assert env.schedule.steps == 7
assert len(times) == 7