J. Fernando Sánchez 2 years ago
parent 0a9c6d8b19
commit f811ee18c5

@@ -3,13 +3,14 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [UNRELEASED]
+## [0.3 UNRELEASED]
### Changed
-* Configuration schema is very different now. Check `soil.config` for more information. We are using Pydantic for (de)serialization.
+* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to assign them to.
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
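The Pydantic-based (de)serialization mentioned above can be sketched as follows. This is illustrative only: the model and field names below are stand-ins, not Soil's actual `soil.config` schema.

```python
# Illustrative sketch only: these models are stand-ins, NOT Soil's actual
# `soil.config` classes. Pydantic validates and coerces plain dicts
# (e.g. parsed YAML) into typed objects.
from typing import Dict, List

from pydantic import BaseModel


class AgentConfig(BaseModel):
    agent_class: str
    weight: int = 1
    state: Dict[str, object] = {}


class SimulationConfig(BaseModel):
    name: str
    num_trials: int = 1
    network_agents: List[AgentConfig] = []


raw = {
    "name": "quickstart",
    "network_agents": [{"agent_class": "SISaModel", "weight": 9}],
}
cfg = SimulationConfig(**raw)  # deserialization + validation in one step
assert cfg.network_agents[0].weight == 9
```

Validation errors (e.g. a non-integer `weight`) are raised at load time instead of surfacing mid-simulation.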
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
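The non-nesting behavior described above can be implemented with an idempotent `__new__`. The class below is a simplified stand-in for `soil.time.When`, not its actual code:

```python
# Simplified stand-in for soil.time.When (NOT the real implementation),
# showing how constructing a When from a When returns the argument.
class When:
    def __new__(cls, time):
        if isinstance(time, When):
            # Do not nest: hand back the existing instance unchanged.
            return time
        obj = super().__new__(cls)
        obj.time = time
        return obj


w1 = When(3)
w2 = When(w1)
assert w2 is w1  # same object, no wrapping
```

Because `__new__` short-circuits, arbitrarily repeated wrapping collapses to the original instance.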

@@ -5,6 +5,42 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
# Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
  - [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
  - [x] `Soil.Environment` inherits from `mesa.Model`
  - [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
  - [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
  - [x] Rename agent.id to unique_id?
  - [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples:
  - [ ] Using mesa modules in a soil simulation
  - [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation
@@ -31,25 +67,6 @@ If you use Soil in your research, don't forget to cite this paper:
```
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
As of this writing,
This is a non-exhaustive list of tasks to achieve compatibility:
* Environments.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. Might rename environment_agents to other_agents or sth like that
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Document the new APIs and usage
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

@@ -13,7 +13,7 @@ Here's an example (``example.yml``).
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
-The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
+The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``.
The state of the agents will be updated every 2 seconds (``interval``).
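The quoted percentages follow from the relative weights: each agent class is chosen with probability ``weight / total``. A quick check of the numbers above:

```python
# Relative weights from the example: content=1, discontent=1, neutral=8.
weights = {"content": 1, "discontent": 1, "neutral": 8}
total = sum(weights.values())  # 10

shares = {state: w / total for state, w in weights.items()}
assert shares == {"content": 0.1, "discontent": 0.1, "neutral": 0.8}
```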
@@ -116,7 +116,7 @@ Agents
======
Agents are a way of modelling behavior.
-Agents can be characterized with two variables: agent type (``agent_type``) and state.
+Agents can be characterized with two variables: agent type (``agent_class``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
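A framework-free sketch of that contract (``BaseAgent`` and ``CounterAgent`` below are stand-ins, not Soil's real classes):

```python
# Stand-in classes illustrating the agent contract; NOT Soil's actual API.
class BaseAgent:
    def __init__(self, unique_id, state=None):
        self.unique_id = unique_id
        self.state = dict(state or {})  # mutable per-agent state

    def step(self):
        """Called once per simulation step; subclasses override this."""


class CounterAgent(BaseAgent):
    def step(self):
        # Behavior reads and updates the agent's state.
        self.state["times"] = self.state.get("times", 0) + 1


agent = CounterAgent(unique_id=0)
for _ in range(3):
    agent.step()
assert agent.state["times"] == 3
```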
@@ -142,7 +142,7 @@ Hence, every node in the network will be associated to an agent of that type.
.. code:: yaml

-   agent_type: SISaModel
+   agent_class: SISaModel

It is also possible to add more than one type of agent to the simulation.
@@ -152,9 +152,9 @@ For instance, with following configuration, it is five times more likely for a n
.. code:: yaml

    network_agents:
-     - agent_type: SISaModel
+     - agent_class: SISaModel
        weight: 1
-     - agent_type: CounterModel
+     - agent_class: CounterModel
        weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
@@ -165,10 +165,10 @@ The third option is to specify the type of agent on the node itself:
    topology:
      nodes:
        - id: first
-         agent_type: BaseAgent
+         agent_class: BaseAgent
    states:
      first:
-       agent_type: SISaModel
+       agent_class: SISaModel
This would also work with a randomly generated network:
@@ -179,9 +179,9 @@ This would also work with a randomly generated network:
    network:
      generator: complete
      n: 5
-   agent_type: BaseAgent
+   agent_class: BaseAgent
    states:
-     - agent_type: SISaModel
+     - agent_class: SISaModel
@@ -192,11 +192,11 @@ e.g., to populate the network with SISaModel, roughly 10% of them with a discont
.. code:: yaml

    network_agents:
-     - agent_type: SISaModel
+     - agent_class: SISaModel
        weight: 9
        state:
          id: neutral
-     - agent_type: SISaModel
+     - agent_class: SISaModel
        weight: 1
        state:
          id: discontent
@@ -206,7 +206,7 @@ For instance, to add a state for the two nodes in this configuration:
.. code:: yaml

-   agent_type: SISaModel
+   agent_class: SISaModel
    network:
      generator: complete_graph
      n: 2
@@ -231,10 +231,10 @@ These agents are programmed in much the same way as network agents, the only dif
.. code::

    environment_agents:
-     - agent_type: MyAgent
+     - agent_class: MyAgent
        state:
          mood: happy
-     - agent_type: DummyAgent
+     - agent_class: DummyAgent

You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
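The pattern can be sketched without Soil itself; ``EventAgent`` below is illustrative (modeled on the ``event_time`` idea used in the tutorial), not a real Soil class:

```python
# Illustrative stand-in for an environment agent; NOT a real Soil class.
# It fires a one-off global event once the simulation reaches `event_time`.
class EventAgent:
    def __init__(self, event_time):
        self.event_time = event_time
        self.now = 0
        self.fired = False

    def step(self):
        # Environment agents step like any other agent, but act on
        # global state instead of a network node.
        if not self.fired and self.now >= self.event_time:
            self.fired = True  # e.g. broadcast news, trigger a disaster
        self.now += 1


agent = EventAgent(event_time=2)
for _ in range(5):
    agent.step()
assert agent.fired
```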

@@ -8,15 +8,15 @@ network_params:
  n: 100
  m: 2
network_agents:
-  - agent_type: SISaModel
+  - agent_class: SISaModel
    weight: 1
    state:
      id: content
-  - agent_type: SISaModel
+  - agent_class: SISaModel
    weight: 1
    state:
      id: discontent
-  - agent_type: SISaModel
+  - agent_class: SISaModel
    weight: 8
    state:
      id: neutral

@@ -3,11 +3,11 @@ name: quickstart
num_trials: 1
max_time: 1000
network_agents:
-  - agent_type: SISaModel
+  - agent_class: SISaModel
    state:
      id: neutral
    weight: 1
-  - agent_type: SISaModel
+  - agent_class: SISaModel
    state:
      id: content
    weight: 2

@@ -211,11 +211,11 @@ nodes in that network. Notice how node 0 is the only one with a TV.
sim = soil.Simulation(topology=G,
                      num_trials=1,
                      max_time=MAX_TIME,
-                     environment_agents=[{'agent_type': NewsEnvironmentAgent,
+                     environment_agents=[{'agent_class': NewsEnvironmentAgent,
                                           'state': {
                                               'event_time': EVENT_TIME
                                           }}],
-                     network_agents=[{'agent_type': NewsSpread,
+                     network_agents=[{'agent_class': NewsSpread,
                                      'weight': 1}],
                      states={0: {'has_tv': True}},
                      default_state={'has_tv': False},
@@ -285,14 +285,14 @@ For this demo, we will use a python dictionary:
    },
    'network_agents': [
        {
-           'agent_type': NewsSpread,
+           'agent_class': NewsSpread,
            'weight': 1,
            'state': {
                'has_tv': False
            }
        },
        {
-           'agent_type': NewsSpread,
+           'agent_class': NewsSpread,
            'weight': 2,
            'state': {
                'has_tv': True
@@ -300,7 +300,7 @@ For this demo, we will use a python dictionary:
        }
    ],
    'environment_agents':[
-       {'agent_type': NewsEnvironmentAgent,
+       {'agent_class': NewsEnvironmentAgent,
        'state': {
            'event_time': 10
        }

@@ -98,11 +98,11 @@
"max_time: 30\r\n",
"name: Sim_all_dumb\r\n",
"network_agents:\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -122,19 +122,19 @@
"max_time: 30\r\n",
"name: Sim_half_herd\r\n",
"network_agents:\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
-"- agent_type: DumbViewer\r\n",
+"- agent_class: DumbViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: false\r\n",
" weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -154,12 +154,12 @@
"max_time: 30\r\n",
"name: Sim_all_herd\r\n",
"network_agents:\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
@@ -181,12 +181,12 @@
"max_time: 30\r\n",
"name: Sim_wise_herd\r\n",
"network_agents:\r\n",
-"- agent_type: HerdViewer\r\n",
+"- agent_class: HerdViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",
@@ -207,12 +207,12 @@
"max_time: 30\r\n",
"name: Sim_all_wise\r\n",
"network_agents:\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" id: neutral\r\n",
" weight: 1\r\n",
-"- agent_type: WiseViewer\r\n",
+"- agent_class: WiseViewer\r\n",
" state:\r\n",
" has_tv: true\r\n",
" weight: 1\r\n",

@@ -141,10 +141,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -1758,10 +1758,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -3363,10 +3363,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -4977,10 +4977,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -6591,10 +6591,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -8211,10 +8211,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -9828,10 +9828,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -11448,10 +11448,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -13062,10 +13062,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -14679,10 +14679,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -16296,10 +16296,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -17916,10 +17916,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -19521,10 +19521,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -21144,10 +21144,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -22767,10 +22767,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -24375,10 +24375,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -25992,10 +25992,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -27603,10 +27603,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -29220,10 +29220,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -30819,10 +30819,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -32439,10 +32439,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -34056,10 +34056,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n",
" 'weight': 1},\n",
-" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+" {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n",
" 'seed': 'None',\n",
@@ -35676,10 +35676,10 @@
" 'load_module': 'newsspread',\n",
" 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n",
-" 'network_agents': [{'agent_type': 'DumbViewer',\n",
+" 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -37293,10 +37293,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -38913,10 +38913,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -40518,10 +40518,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -42129,10 +42129,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -43746,10 +43746,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -45357,10 +45357,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -46974,10 +46974,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -48588,10 +48588,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -50202,10 +50202,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -51819,10 +51819,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -53436,10 +53436,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -55041,10 +55041,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -56655,10 +56655,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -58257,10 +58257,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -59877,10 +59877,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -61494,10 +61494,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -63108,10 +63108,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -64713,10 +64713,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -66330,10 +66330,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -67947,10 +67947,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -69561,10 +69561,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -71178,10 +71178,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -72801,10 +72801,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -74418,10 +74418,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -76035,10 +76035,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -77643,10 +77643,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@ -79260,10 +79260,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",

@ -30,6 +30,7 @@ agents:
times: 1 times: 1
environment: environment:
# In this group we are not specifying any topology # In this group we are not specifying any topology
topology: False
fixed: fixed:
- name: 'Environment Agent 1' - name: 'Environment Agent 1'
agent_class: CounterModel agent_class: CounterModel

@ -10,7 +10,7 @@ network_params:
n: 10 n: 10
n_edges: 5 n_edges: 5
network_agents: network_agents:
- agent_type: CounterModel - agent_class: CounterModel
weight: 1 weight: 1
state: state:
state_id: 0 state_id: 0

@ -1,6 +1,5 @@
from networkx import Graph from networkx import Graph
import networkx as nx import networkx as nx
from random import choice
def mygenerator(n=5, n_edges=5): def mygenerator(n=5, n_edges=5):
''' '''
@ -14,9 +13,9 @@ def mygenerator(n=5, n_edges=5):
for i in range(n_edges): for i in range(n_edges):
nodes = list(G.nodes) nodes = list(G.nodes)
n_in = choice(nodes) n_in = self.random.choice(nodes)
nodes.remove(n_in) # Avoid loops nodes.remove(n_in) # Avoid loops
n_out = choice(nodes) n_out = self.random.choice(nodes)
G.add_edge(n_in, n_out) G.add_edge(n_in, n_out)
return G return G
@ -24,4 +23,4 @@ def mygenerator(n=5, n_edges=5):
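The generator above moves from the global `random.choice` to the simulation's seeded RNG (`self.random.choice`), so runs become reproducible. The same idea can be sketched with the standard library alone; the explicit `seed` parameter and the set-of-edges return value are illustrative assumptions, not part of the soil API (networkx is omitted to keep the sketch stdlib-only):

```python
import random

def mygenerator(n=5, n_edges=5, seed=None):
    # Stdlib-only sketch of the example generator: draw endpoints from an
    # explicit random.Random instead of the global `random.choice`, so the
    # same seed always produces the same edge set.
    rng = random.Random(seed)
    edges = set()
    for _ in range(n_edges):
        nodes = list(range(n))
        n_in = rng.choice(nodes)
        nodes.remove(n_in)  # avoid self-loops, as in the original
        n_out = rng.choice(nodes)
        edges.add((n_in, n_out))
    return edges
```

With a fixed seed, two calls yield identical graphs, which is the property the `self.random` change buys inside a simulation.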

@ -27,8 +27,8 @@ if __name__ == '__main__':
import logging import logging
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
from soil import Simulation from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci}, s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci},
{'ids': [1], 'agent_type': Odds}], {'ids': [1], 'agent_class': Odds}],
network_params={"generator": "complete_graph", "n": 2}, network_params={"generator": "complete_graph", "n": 2},
max_time=100, max_time=100,
) )

@ -10,11 +10,11 @@ network_params:
generator: social_wealth.graph_generator generator: social_wealth.graph_generator
n: 5 n: 5
network_agents: network_agents:
- agent_type: social_wealth.SocialMoneyAgent - agent_class: social_wealth.SocialMoneyAgent
weight: 1 weight: 1
environment_class: social_wealth.MoneyEnv environment_class: social_wealth.MoneyEnv
environment_params: environment_params:
mesa_agent_type: social_wealth.MoneyAgent mesa_agent_class: social_wealth.MoneyAgent
N: 10 N: 10
width: 50 width: 50
height: 50 height: 50

@ -70,7 +70,7 @@ model_params = {
1, 1,
description="Choose how many agents to include in the model", description="Choose how many agents to include in the model",
), ),
"network_agents": [{"agent_type": SocialMoneyAgent}], "network_agents": [{"agent_class": SocialMoneyAgent}],
"height": UserSettableParameter( "height": UserSettableParameter(
"slider", "slider",
"height", "height",

@ -99,7 +99,7 @@ if __name__ == '__main__':
G = graph_generator() G = graph_generator()
fixed_params = {"topology": G, fixed_params = {"topology": G,
"width": 10, "width": 10,
"network_agents": [{"agent_type": SocialMoneyAgent, "network_agents": [{"agent_class": SocialMoneyAgent,
'weight': 1}], 'weight': 1}],
"height": 10} "height": 10}

@ -89,11 +89,11 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_dumb\r\n", "name: Sim_all_dumb\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@ -113,19 +113,19 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_half_herd\r\n", "name: Sim_half_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@ -145,12 +145,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_herd\r\n", "name: Sim_all_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
@ -172,12 +172,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_wise_herd\r\n", "name: Sim_wise_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@ -198,12 +198,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_wise\r\n", "name: Sim_all_wise\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",

@@ -8,11 +8,11 @@ interval: 1
 max_time: 300
 name: Sim_all_dumb
 network_agents:
-- agent_type: newsspread.DumbViewer
+- agent_class: newsspread.DumbViewer
   state:
     has_tv: false
   weight: 1
-- agent_type: newsspread.DumbViewer
+- agent_class: newsspread.DumbViewer
   state:
     has_tv: true
   weight: 1
@@ -31,19 +31,19 @@ interval: 1
 max_time: 300
 name: Sim_half_herd
 network_agents:
-- agent_type: newsspread.DumbViewer
+- agent_class: newsspread.DumbViewer
   state:
     has_tv: false
   weight: 1
-- agent_type: newsspread.DumbViewer
+- agent_class: newsspread.DumbViewer
   state:
     has_tv: true
   weight: 1
-- agent_type: newsspread.HerdViewer
+- agent_class: newsspread.HerdViewer
   state:
     has_tv: false
   weight: 1
-- agent_type: newsspread.HerdViewer
+- agent_class: newsspread.HerdViewer
   state:
     has_tv: true
   weight: 1
@@ -62,12 +62,12 @@ interval: 1
 max_time: 300
 name: Sim_all_herd
 network_agents:
-- agent_type: newsspread.HerdViewer
+- agent_class: newsspread.HerdViewer
   state:
     has_tv: true
     state_id: neutral
   weight: 1
-- agent_type: newsspread.HerdViewer
+- agent_class: newsspread.HerdViewer
   state:
     has_tv: true
     state_id: neutral
@@ -88,12 +88,12 @@ interval: 1
 max_time: 300
 name: Sim_wise_herd
 network_agents:
-- agent_type: newsspread.HerdViewer
+- agent_class: newsspread.HerdViewer
   state:
     has_tv: true
     state_id: neutral
   weight: 1
-- agent_type: newsspread.WiseViewer
+- agent_class: newsspread.WiseViewer
   state:
     has_tv: true
   weight: 1
@@ -113,12 +113,12 @@ interval: 1
 max_time: 300
 name: Sim_all_wise
 network_agents:
-- agent_type: newsspread.WiseViewer
+- agent_class: newsspread.WiseViewer
   state:
     has_tv: true
     state_id: neutral
   weight: 1
-- agent_type: newsspread.WiseViewer
+- agent_class: newsspread.WiseViewer
   state:
     has_tv: true
   weight: 1

@@ -27,7 +27,7 @@ s = Simulation(name='Programmatic',
                network_params={'generator': mygenerator},
                num_trials=1,
                max_time=100,
-               agent_type=MyAgent,
+               agent_class=MyAgent,
                dry_run=True)

@@ -1,6 +1,5 @@
 from soil.agents import FSM, NetworkAgent, state, default_state
 from soil import Environment
-from random import random, shuffle
 from itertools import islice
 import logging
@@ -128,7 +127,7 @@ class Patron(FSM, NetworkAgent):
         Try to become friends with another agent. The chances of
         success depend on both agents' openness.
         '''
-        if force or self['openness'] > random():
+        if force or self['openness'] > self.random.random():
             self.env.add_edge(self, other_agent)
             self.info('Made some friend {}'.format(other_agent))
             return True
@@ -138,7 +137,7 @@ class Patron(FSM, NetworkAgent):
         ''' Look for random agents around me and try to befriend them'''
         befriended = False
         k = int(10*self['openness'])
-        shuffle(others)
+        self.random.shuffle(others)
         for friend in islice(others, k):  # random.choice >= 3.7
             if friend == self:
                 continue
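The change above is representative of the whole commit: agents stop calling the module-level `random` functions and draw from their own `self.random` generator instead. A minimal stdlib sketch of that pattern (the `Agent` class and seed-derivation scheme here are illustrative, not soil's actual API):

```python
import random

# Each agent gets its own random.Random stream derived from the simulation
# seed, so repeated trials with the same seed reproduce the same draws.
class Agent:
    def __init__(self, unique_id, seed):
        # Derive a per-agent stream from the global seed and the agent id
        self.random = random.Random(f"{seed}-{unique_id}")

a1 = Agent(0, seed="MySeed")
a2 = Agent(0, seed="MySeed")
# Same seed and id -> identical sequence of draws
assert [a1.random.random() for _ in range(5)] == [a2.random.random() for _ in range(5)]
```

This is why `shuffle(others)` becomes `self.random.shuffle(others)`: the shuffle order is now tied to the trial's seed rather than to global interpreter state.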

@@ -8,18 +8,18 @@ network_params:
   generator: empty_graph
   n: 30
 network_agents:
-- agent_type: pubcrawl.Patron
+- agent_class: pubcrawl.Patron
   description: Extroverted patron
   state:
     openness: 1.0
   weight: 9
-- agent_type: pubcrawl.Patron
+- agent_class: pubcrawl.Patron
   description: Introverted patron
   state:
     openness: 0.1
   weight: 1
 environment_agents:
-- agent_type: pubcrawl.Police
+- agent_class: pubcrawl.Police
 environment_class: pubcrawl.CityPubs
 environment_params:
   altercations: 0

@@ -1,6 +1,5 @@
 from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
 from enum import Enum
-from random import random, choice
 import logging
 import math
@@ -57,10 +56,10 @@ class Male(RabbitModel):
         # Males try to mate
         for f in self.get_agents(state_id=Female.fertile.id,
-                                 agent_type=Female,
+                                 agent_class=Female,
                                  limit_neighbors=False,
                                  limit=self.max_females):
-            r = random()
+            r = self.random.random()
             if r < self['mating_prob']:
                 self.impregnate(f)
                 break  # Take a break
@@ -85,11 +84,11 @@ class Female(RabbitModel):
             self['pregnancy'] += 1
             self.debug('Pregnancy: {}'.format(self['pregnancy']))
             if self['pregnancy'] >= self.gestation:
-                number_of_babies = int(8+4*random())
+                number_of_babies = int(8+4*self.random.random())
                 self.info('Having {} babies'.format(number_of_babies))
                 for i in range(number_of_babies):
                     state = {}
-                    state['gender'] = choice(list(Genders)).value
+                    state['gender'] = self.random.choice(list(Genders)).value
                     child = self.env.add_node(self.__class__, state)
                     self.env.add_edge(self.id, child.id)
                     self.env.add_edge(self['mate'], child.id)
@@ -124,8 +123,7 @@ class RandomAccident(BaseAgent):
         for i in self.env.network_agents:
             if i.state['id'] == i.dead.id:
                 continue
-            r = random()
-            if r < prob_death:
+            if self.prob(prob_death):
                 self.debug('I killed a rabbit: {}'.format(i.id))
                 rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
                 self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
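The `RandomAccident` hunk collapses the two-line `r = random(); if r < prob_death:` idiom into a single `self.prob(prob_death)` call. A sketch of what such a helper looks like, assuming it draws from the agent's own RNG (the `BaseAgent` below is illustrative, not soil's class):

```python
import random

class BaseAgent:
    def __init__(self, seed=None):
        self.random = random.Random(seed)

    def prob(self, probability):
        # True with the given probability, drawn from this agent's RNG
        return self.random.random() < probability

agent = BaseAgent(seed=42)
assert agent.prob(1.0) is True   # random() is in [0, 1), so p=1 always fires
assert agent.prob(0.0) is False  # and p=0 never fires
```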

@@ -3,9 +3,9 @@ name: rabbits_example
 max_time: 100
 interval: 1
 seed: MySeed
-agent_type: rabbit_agents.RabbitModel
+agent_class: rabbit_agents.RabbitModel
 environment_agents:
-- agent_type: rabbit_agents.RandomAccident
+- agent_class: rabbit_agents.RandomAccident
 environment_params:
   prob_death: 0.001
 default_state:
@@ -13,8 +13,8 @@ default_state:
 topology:
   nodes:
   - id: 1
-    agent_type: rabbit_agents.Male
+    agent_class: rabbit_agents.Male
   - id: 0
-    agent_type: rabbit_agents.Female
+    agent_class: rabbit_agents.Female
   directed: true
   links: []

@@ -4,7 +4,6 @@ Example of a fully programmatic simulation, without definition files.
 '''
 from soil import Simulation, agents
 from soil.time import Delta
-from random import expovariate
 import logging
@@ -20,7 +19,7 @@ class MyAgent(agents.FSM):
     @agents.state
     def ping(self):
         self.info('Ping')
-        return self.pong, Delta(expovariate(1/16))
+        return self.pong, Delta(self.random.expovariate(1/16))

     @agents.state
     def pong(self):
@@ -29,15 +28,15 @@ class MyAgent(agents.FSM):
         self.info(str(self.pong_counts))
         if self.pong_counts < 1:
             return self.die()
-        return None, Delta(expovariate(1/16))
+        return None, Delta(self.random.expovariate(1/16))

 s = Simulation(name='Programmatic',
-               network_agents=[{'agent_type': MyAgent, 'id': 0}],
+               network_agents=[{'agent_class': MyAgent, 'id': 0}],
                topology={'nodes': [{'id': 0}], 'links': []},
                num_trials=1,
                max_time=100,
-               agent_type=MyAgent,
+               agent_class=MyAgent,
                dry_run=True)
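The ping/pong agent above waits `Delta(expovariate(1/16))` between state changes: an exponentially distributed delay with rate 1/16, i.e. a mean of 16 time units. A quick stdlib check of that claim (`Delta` itself is soil's wrapper and is not needed for the arithmetic):

```python
import random
import statistics

# expovariate(lambd) has mean 1/lambd; with lambd = 1/16 the mean delay is 16.
rng = random.Random(0)
delays = [rng.expovariate(1 / 16) for _ in range(100_000)]
mean = statistics.fmean(delays)
assert 15.5 < mean < 16.5  # sample mean close to the theoretical mean of 16
```

Because the draw now goes through `self.random.expovariate`, the whole event schedule of a trial is reproducible from the simulation seed.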

@@ -13,11 +13,11 @@ template:
     generator: complete_graph
     n: 10
   network_agents:
-  - agent_type: CounterModel
+  - agent_class: CounterModel
     weight: "{{ x1 }}"
     state:
       state_id: 0
-  - agent_type: AggregatedCounter
+  - agent_class: AggregatedCounter
     weight: "{{ 1 - x1 }}"
   environment_params:
     name: "{{ x3 }}"

@@ -1,4 +1,3 @@
-import random
 import networkx as nx
 from soil.agents import Geo, NetworkAgent, FSM, state, default_state
 from soil import Environment
@@ -26,26 +25,26 @@ class TerroristSpreadModel(FSM, Geo):
         self.prob_interaction = model.environment_params['prob_interaction']

         if self['id'] == self.civilian.id:  # Civilian
-            self.mean_belief = random.uniform(0.00, 0.5)
+            self.mean_belief = self.random.uniform(0.00, 0.5)
         elif self['id'] == self.terrorist.id:  # Terrorist
-            self.mean_belief = random.uniform(0.8, 1.00)
+            self.mean_belief = self.random.uniform(0.8, 1.00)
         elif self['id'] == self.leader.id:  # Leader
             self.mean_belief = 1.00
         else:
             raise Exception('Invalid state id: {}'.format(self['id']))

         if 'min_vulnerability' in model.environment_params:
-            self.vulnerability = random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
+            self.vulnerability = self.random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
         else :
-            self.vulnerability = random.uniform( 0, model.environment_params['max_vulnerability'] )
+            self.vulnerability = self.random.uniform( 0, model.environment_params['max_vulnerability'] )

     @state
     def civilian(self):
-        neighbours = list(self.get_neighboring_agents(agent_type=TerroristSpreadModel))
+        neighbours = list(self.get_neighboring_agents(agent_class=TerroristSpreadModel))
         if len(neighbours) > 0:
             # Only interact with some of the neighbors
-            interactions = list(n for n in neighbours if random.random() <= self.prob_interaction)
+            interactions = list(n for n in neighbours if self.random.random() <= self.prob_interaction)
             influence = sum( self.degree(i) for i in interactions )
             mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
             mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
@@ -64,7 +63,7 @@ class TerroristSpreadModel(FSM, Geo):
     @state
     def terrorist(self):
         neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
-                                     agent_type=TerroristSpreadModel,
+                                     agent_class=TerroristSpreadModel,
                                      limit_neighbors=True)
         if len(neighbours) > 0:
             influence = sum( self.degree(n) for n in neighbours )
@@ -103,7 +102,7 @@ class TrainingAreaModel(FSM, Geo):
     @default_state
     @state
     def terrorist(self):
-        for neighbour in self.get_neighboring_agents(agent_type=TerroristSpreadModel):
+        for neighbour in self.get_neighboring_agents(agent_class=TerroristSpreadModel):
             if neighbour.vulnerability > self.min_vulnerability:
                 neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
@@ -129,7 +128,7 @@ class HavenModel(FSM, Geo):
         self.max_vulnerability = model.environment_params['max_vulnerability']

     def get_occupants(self, **kwargs):
-        return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs)
+        return self.get_neighboring_agents(agent_class=TerroristSpreadModel, **kwargs)

     @state
     def civilian(self):
@@ -182,15 +181,15 @@ class TerroristNetworkModel(TerroristSpreadModel):
     def update_relationships(self):
         if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
-            close_ups = set(self.geo_search(radius=self.vision_range, agent_type=TerroristNetworkModel))
-            step_neighbours = set(self.ego_search(self.sphere_influence, agent_type=TerroristNetworkModel, center=False))
-            neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_type=TerroristNetworkModel))
+            close_ups = set(self.geo_search(radius=self.vision_range, agent_class=TerroristNetworkModel))
+            step_neighbours = set(self.ego_search(self.sphere_influence, agent_class=TerroristNetworkModel, center=False))
+            neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_class=TerroristNetworkModel))
             search = (close_ups | step_neighbours) - neighbours
             for agent in self.get_agents(search):
                 social_distance = 1 / self.shortest_path_length(agent.id)
                 spatial_proximity = ( 1 - self.get_distance(agent.id) )
                 prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
-                if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction:
+                if agent['id'] == agent.civilian.id and self.random.random() < prob_new_interaction:
                     self.add_edge(agent)
                     break

@@ -8,19 +8,19 @@ network_params:
   # theta: 20
   n: 100
 network_agents:
-- agent_type: TerroristNetworkModel.TerroristNetworkModel
+- agent_class: TerroristNetworkModel.TerroristNetworkModel
   weight: 0.8
   state:
     id: civilian  # Civilians
-- agent_type: TerroristNetworkModel.TerroristNetworkModel
+- agent_class: TerroristNetworkModel.TerroristNetworkModel
   weight: 0.1
   state:
     id: leader  # Leaders
-- agent_type: TerroristNetworkModel.TrainingAreaModel
+- agent_class: TerroristNetworkModel.TrainingAreaModel
   weight: 0.05
   state:
     id: terrorist  # Terrorism
-- agent_type: TerroristNetworkModel.HavenModel
+- agent_class: TerroristNetworkModel.HavenModel
   weight: 0.05
   state:
     id: civilian  # Civilian

@@ -2,7 +2,7 @@
 name: torvalds_example
 max_time: 10
 interval: 2
-agent_type: CounterModel
+agent_class: CounterModel
 default_state:
   skill_level: 'beginner'
 network_params:

@@ -12330,11 +12330,11 @@ Notice how node 0 is the only one with a TV.</p>
 <span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
 <span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
 <span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
-<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
+<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
 <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
 <span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="n">EVENT_TIME</span>
 <span class="p">}}],</span>
-<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
+<span class="n">network_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
 <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">}],</span>
 <span class="n">states</span><span class="o">=</span><span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span><span class="p">}},</span>
 <span class="n">default_state</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span><span class="p">},</span>
@@ -12468,14 +12468,14 @@ For this demo, we will use a python dictionary:</p>
 <span class="p">},</span>
 <span class="s1">&#39;network_agents&#39;</span><span class="p">:</span> <span class="p">[</span>
 <span class="p">{</span>
-<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
+<span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
 <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
 <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
 <span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">False</span>
 <span class="p">}</span>
 <span class="p">},</span>
 <span class="p">{</span>
-<span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
+<span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsSpread</span><span class="p">,</span>
 <span class="s1">&#39;weight&#39;</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
 <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
 <span class="s1">&#39;has_tv&#39;</span><span class="p">:</span> <span class="kc">True</span>
@@ -12483,7 +12483,7 @@ For this demo, we will use a python dictionary:</p>
 <span class="p">}</span>
 <span class="p">],</span>
 <span class="s1">&#39;environment_agents&#39;</span><span class="p">:[</span>
-<span class="p">{</span><span class="s1">&#39;agent_type&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
+<span class="p">{</span><span class="s1">&#39;agent_class&#39;</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
 <span class="s1">&#39;state&#39;</span><span class="p">:</span> <span class="p">{</span>
 <span class="s1">&#39;event_time&#39;</span><span class="p">:</span> <span class="mi">10</span>
 <span class="p">}</span>

@@ -459,11 +459,11 @@
 "sim = soil.Simulation(topology=G,\n",
 "                      num_trials=1,\n",
 "                      max_time=MAX_TIME,\n",
-"                      environment_agents=[{'agent_type': NewsEnvironmentAgent,\n",
+"                      environment_agents=[{'agent_class': NewsEnvironmentAgent,\n",
 "                                           'state': {\n",
 "                                               'event_time': EVENT_TIME\n",
 "                                           }}],\n",
-"                      network_agents=[{'agent_type': NewsSpread,\n",
+"                      network_agents=[{'agent_class': NewsSpread,\n",
 "                                       'weight': 1}],\n",
 "                      states={0: {'has_tv': True}},\n",
 "                      default_state={'has_tv': False},\n",
@@ -588,14 +588,14 @@
 "    },\n",
 "    'network_agents': [\n",
 "        {\n",
-"            'agent_type': NewsSpread,\n",
+"            'agent_class': NewsSpread,\n",
 "            'weight': 1,\n",
 "            'state': {\n",
 "                'has_tv': False\n",
 "            }\n",
 "        },\n",
 "        {\n",
-"            'agent_type': NewsSpread,\n",
+"            'agent_class': NewsSpread,\n",
 "            'weight': 2,\n",
 "            'state': {\n",
 "                'has_tv': True\n",
@@ -603,7 +603,7 @@
 "            }\n",
 "    ],\n",
 "    'environment_agents':[\n",
-"        {'agent_type': NewsEnvironmentAgent,\n",
+"        {'agent_class': NewsEnvironmentAgent,\n",
 "         'state': {\n",
 "             'event_time': 10\n",
 "         }\n",

@@ -1,4 +1,3 @@
-import random
 from . import FSM, state, default_state
@@ -16,13 +15,13 @@ class BassModel(FSM):
     @default_state
     @state
     def innovation(self):
-        if random.random() < self.innovation_prob:
+        if self.prob(self.innovation_prob):
             self.sentimentCorrelation = 1
             return self.aware
         else:
             aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
             num_neighbors_aware = len(aware_neighbors)
-            if random.random() < (self['imitation_prob']*num_neighbors_aware):
+            if self.prob((self['imitation_prob']*num_neighbors_aware)):
                 self.sentimentCorrelation = 1
                 return self.aware

@@ -1,4 +1,3 @@
-import random
 from . import FSM, state, default_state
@@ -39,10 +38,10 @@ class BigMarketModel(FSM):
     @state
     def enterprise(self):
-        if random.random() < self.tweet_probability:  # Tweets
+        if self.random.random() < self.tweet_probability:  # Tweets
             aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises)  # Nodes neighbour users
             for x in aware_neighbors:
-                if random.uniform(0,10) < 5:
+                if self.random.uniform(0,10) < 5:
                     x.sentiment_about[self.id] += 0.1  # Increments for enterprise
                 else:
                     x.sentiment_about[self.id] -= 0.1  # Decrements for enterprise
@@ -57,11 +56,11 @@ class BigMarketModel(FSM):
     @state
     def user(self):
-        if random.random() < self.tweet_probability:  # Tweets
-            if random.random() < self.tweet_relevant_probability:  # Tweets something relevant
+        if self.random.random() < self.tweet_probability:  # Tweets
+            if self.random.random() < self.tweet_relevant_probability:  # Tweets something relevant
                 # Tweet probability per enterprise
                 for i in range(len(self.enterprises)):
-                    random_num = random.random()
+                    random_num = self.random.random()
                     if random_num < self.tweet_probability_about[i]:
                         # The condition is fulfilled, sentiments are evaluated towards that enterprise
                         if self.sentiment_about[i] < 0:

@@ -1,4 +1,3 @@
-import random
 from . import BaseAgent
@@ -23,7 +22,7 @@ class IndependentCascadeModel(BaseAgent):
     def behaviour(self):
         aware_neighbors_1_time_step = []
         # Outside effects
-        if random.random() < self.innovation_prob:
+        if self.prob(self.innovation_prob):
             if self.state['id'] == 0:
                 self.state['id'] = 1
                 self.state['sentimentCorrelation'] = 1
@@ -40,7 +39,7 @@ class IndependentCascadeModel(BaseAgent):
             if x.state['time_awareness'] == (self.env.now-1):
                 aware_neighbors_1_time_step.append(x)
         num_neighbors_aware = len(aware_neighbors_1_time_step)
-        if random.random() < (self.imitation_prob*num_neighbors_aware):
+        if self.prob(self.imitation_prob*num_neighbors_aware):
             self.state['id'] = 1
             self.state['sentimentCorrelation'] = 1
         else:

@@ -1,4 +1,3 @@
-import random
 import numpy as np
 from . import BaseAgent
@@ -24,23 +23,26 @@ class SpreadModelM2(BaseAgent):
     def __init__(self, model=None, unique_id=0, state=()):
         super().__init__(model=environment, unique_id=unique_id, state=state)
-        self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
-                                                           environment.environment_params['standard_variance'])
-        self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
-                                            environment.environment_params['standard_variance'])
-        self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
-                                                            environment.environment_params['standard_variance'])
-        self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
-                                                             environment.environment_params['standard_variance'])
-        self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
-                                                                 environment.environment_params['standard_variance'])
-        self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
-                                                                  environment.environment_params['standard_variance'])
-        self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
-                                                         environment.environment_params['standard_variance'])
+        # Use a single generator with the same seed as `self.random`
+        random = np.random.default_rng(seed=self._seed)
+        self.prob_neutral_making_denier = random.normal(environment.environment_params['prob_neutral_making_denier'],
+                                                        environment.environment_params['standard_variance'])
+        self.prob_infect = random.normal(environment.environment_params['prob_infect'],
+                                         environment.environment_params['standard_variance'])
+        self.prob_cured_healing_infected = random.normal(environment.environment_params['prob_cured_healing_infected'],
+                                                         environment.environment_params['standard_variance'])
+        self.prob_cured_vaccinate_neutral = random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
+                                                          environment.environment_params['standard_variance'])
+        self.prob_vaccinated_healing_infected = random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
+                                                              environment.environment_params['standard_variance'])
+        self.prob_vaccinated_vaccinate_neutral = random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
+                                                               environment.environment_params['standard_variance'])
+        self.prob_generate_anti_rumor = random.normal(environment.environment_params['prob_generate_anti_rumor'],
+                                                      environment.environment_params['standard_variance'])
def step(self): def step(self):
@@ -58,7 +60,7 @@ class SpreadModelM2(BaseAgent):
         # Infected
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         if len(infected_neighbors) > 0:
-            if random.random() < self.prob_neutral_making_denier:
+            if self.prob(self.prob_neutral_making_denier):
                 self.state['id'] = 3  # Vaccinated making denier

     def infected_behaviour(self):
@@ -66,7 +68,7 @@ class SpreadModelM2(BaseAgent):
         # Neutral
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_infect:
+            if self.prob(self.prob_infect):
                 neighbor.state['id'] = 1  # Infected

     def cured_behaviour(self):
@@ -74,13 +76,13 @@ class SpreadModelM2(BaseAgent):
         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

     def vaccinated_behaviour(self):
@@ -88,19 +90,19 @@ class SpreadModelM2(BaseAgent):
         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Generate anti-rumor
         infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors_2:
-            if random.random() < self.prob_generate_anti_rumor:
+            if self.prob(self.prob_generate_anti_rumor):
                 neighbor.state['id'] = 2  # Cured
@@ -165,7 +167,7 @@ class ControlModelM2(BaseAgent):
         # Infected
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         if len(infected_neighbors) > 0:
            if random.random() < self.prob_neutral_making_denier:                   if self.prob(self.prob_neutral_making_denier):
self.state['id'] = 3 # Vaccinated making denier self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self): def infected_behaviour(self):
@ -173,7 +175,7 @@ class ControlModelM2(BaseAgent):
# Neutral # Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0) neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors: for neighbor in neutral_neighbors:
if random.random() < self.prob_infect: if self.prob(self.prob_infect):
neighbor.state['id'] = 1 # Infected neighbor.state['id'] = 1 # Infected
self.state['visible'] = False self.state['visible'] = False
@ -183,13 +185,13 @@ class ControlModelM2(BaseAgent):
# Vaccinate # Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0) neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors: for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral: if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state['id'] = 3 # Vaccinated neighbor.state['id'] = 3 # Vaccinated
# Cure # Cure
infected_neighbors = self.get_neighboring_agents(state_id=1) infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors: for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected: if self.prob(self.prob_cured_healing_infected):
neighbor.state['id'] = 2 # Cured neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self): def vaccinated_behaviour(self):
@ -198,19 +200,19 @@ class ControlModelM2(BaseAgent):
# Cure # Cure
infected_neighbors = self.get_neighboring_agents(state_id=1) infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors: for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected: if self.prob(self.prob_cured_healing_infected):
neighbor.state['id'] = 2 # Cured neighbor.state['id'] = 2 # Cured
# Vaccinate # Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0) neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors: for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral: if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state['id'] = 3 # Vaccinated neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor # Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1) infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2: for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor: if self.prob(self.prob_generate_anti_rumor):
neighbor.state['id'] = 2 # Cured neighbor.state['id'] = 2 # Cured
def beacon_off_behaviour(self): def beacon_off_behaviour(self):
@ -224,19 +226,19 @@ class ControlModelM2(BaseAgent):
# Cure (M2 feature added) # Cure (M2 feature added)
infected_neighbors = self.get_neighboring_agents(state_id=1) infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors: for neighbor in infected_neighbors:
if random.random() < self.prob_generate_anti_rumor: if self.prob(self.prob_generate_anti_rumor):
neighbor.state['id'] = 2 # Cured neighbor.state['id'] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0) neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors_infected: for neighbor in neutral_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor: if self.prob(self.prob_generate_anti_rumor):
neighbor.state['id'] = 3 # Vaccinated neighbor.state['id'] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1) infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_infected: for neighbor in infected_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor: if self.prob(self.prob_generate_anti_rumor):
neighbor.state['id'] = 2 # Cured neighbor.state['id'] = 2 # Cured
# Vaccinate # Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0) neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors: for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral: if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state['id'] = 3 # Vaccinated neighbor.state['id'] = 3 # Vaccinated

@ -1,4 +1,3 @@
import random
import numpy as np import numpy as np
from . import FSM, state from . import FSM, state
@ -32,62 +31,64 @@ class SISaModel(FSM):
def __init__(self, environment, unique_id=0, state=()): def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state) super().__init__(model=environment, unique_id=unique_id, state=state)
self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'], random = np.random.default_rng(seed=self._seed)
self.neutral_discontent_spon_prob = random.normal(self.env['neutral_discontent_spon_prob'],
self.env['standard_variance']) self.env['standard_variance'])
self.neutral_discontent_infected_prob = np.random.normal(self.env['neutral_discontent_infected_prob'], self.neutral_discontent_infected_prob = random.normal(self.env['neutral_discontent_infected_prob'],
self.env['standard_variance']) self.env['standard_variance'])
self.neutral_content_spon_prob = np.random.normal(self.env['neutral_content_spon_prob'], self.neutral_content_spon_prob = random.normal(self.env['neutral_content_spon_prob'],
self.env['standard_variance']) self.env['standard_variance'])
self.neutral_content_infected_prob = np.random.normal(self.env['neutral_content_infected_prob'], self.neutral_content_infected_prob = random.normal(self.env['neutral_content_infected_prob'],
self.env['standard_variance']) self.env['standard_variance'])
self.discontent_neutral = np.random.normal(self.env['discontent_neutral'], self.discontent_neutral = random.normal(self.env['discontent_neutral'],
self.env['standard_variance']) self.env['standard_variance'])
self.discontent_content = np.random.normal(self.env['discontent_content'], self.discontent_content = random.normal(self.env['discontent_content'],
self.env['variance_d_c']) self.env['variance_d_c'])
self.content_discontent = np.random.normal(self.env['content_discontent'], self.content_discontent = random.normal(self.env['content_discontent'],
self.env['variance_c_d']) self.env['variance_c_d'])
self.content_neutral = np.random.normal(self.env['content_neutral'], self.content_neutral = random.normal(self.env['content_neutral'],
self.env['standard_variance']) self.env['standard_variance'])
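The per-agent probabilities above are now drawn from a dedicated numpy `Generator` instead of the module-level `np.random` functions, so seeded runs become reproducible. A standalone sketch of the idea (the function and parameter names are illustrative, not the Soil API):

```python
import numpy as np

def sample_probs(seed, mean=0.2, std=0.05, n=4):
    # A Generator seeded per model/agent makes every draw reproducible
    rng = np.random.default_rng(seed=seed)
    return rng.normal(mean, std, size=n)

same_a = sample_probs(42)
same_b = sample_probs(42)   # identical seed, identical draws
other = sample_probs(7)     # a different seed gives different draws
```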
@state @state
def neutral(self): def neutral(self):
# Spontaneous effects # Spontaneous effects
if random.random() < self.neutral_discontent_spon_prob: if self.prob(self.neutral_discontent_spon_prob):
return self.discontent return self.discontent
if random.random() < self.neutral_content_spon_prob: if self.prob(self.neutral_content_spon_prob):
return self.content return self.content
# Infected # Infected
discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent) discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent)
        if random.random() < discontent_neighbors * self.neutral_discontent_infected_prob:                          if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
return self.discontent return self.discontent
content_neighbors = self.count_neighboring_agents(state_id=self.content.id) content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
        if random.random() < content_neighbors * self.neutral_content_infected_prob:                          if self.prob(content_neighbors * self.neutral_content_infected_prob):
return self.content return self.content
return self.neutral return self.neutral
@state @state
def discontent(self): def discontent(self):
# Healing # Healing
if random.random() < self.discontent_neutral: if self.prob(self.discontent_neutral):
return self.neutral return self.neutral
# Superinfected # Superinfected
content_neighbors = self.count_neighboring_agents(state_id=self.content.id) content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
        if random.random() < content_neighbors * self.discontent_content:                          if self.prob(content_neighbors * self.discontent_content):
return self.content return self.content
return self.discontent return self.discontent
@state @state
def content(self): def content(self):
# Healing # Healing
if random.random() < self.content_neutral: if self.prob(self.content_neutral):
return self.neutral return self.neutral
# Superinfected # Superinfected
discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id) discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
        if random.random() < discontent_neighbors * self.content_discontent:                          if self.prob(discontent_neighbors * self.content_discontent):
            self.discontent                 return self.discontent
return self.content return self.content

@ -1,4 +1,3 @@
import random
from . import BaseAgent from . import BaseAgent
@ -68,10 +67,10 @@ class SentimentCorrelationModel(BaseAgent):
disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob) disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
outside_effects_prob = self.outside_effects_prob outside_effects_prob = self.outside_effects_prob
num = random.random() num = self.random.random()
if num<outside_effects_prob: if num<outside_effects_prob:
self.state['id'] = random.randint(1, 4) self.state['id'] = self.random.randint(1, 4)
self.state['sentimentCorrelation'] = self.state['id'] # It is stored when it has been infected for the dynamic network self.state['sentimentCorrelation'] = self.state['id'] # It is stored when it has been infected for the dynamic network
self.state['time_awareness'][self.state['id']-1] = self.env.now self.state['time_awareness'][self.state['id']-1] = self.env.now

@ -2,7 +2,7 @@ import logging
from collections import OrderedDict, defaultdict from collections import OrderedDict, defaultdict
from collections.abc import MutableMapping, Mapping, Set from collections.abc import MutableMapping, Mapping, Set
from abc import ABCMeta from abc import ABCMeta
from copy import deepcopy from copy import deepcopy, copy
from functools import partial, wraps from functools import partial, wraps
from itertools import islice, chain from itertools import islice, chain
import json import json
@ -11,8 +11,6 @@ import networkx as nx
from mesa import Agent as MesaAgent from mesa import Agent as MesaAgent
from typing import Dict, List from typing import Dict, List
from random import shuffle
from .. import serialization, utils, time, config from .. import serialization, utils, time, config
@ -28,6 +26,7 @@ IGNORED_FIELDS = ('model', 'logger')
class DeadAgent(Exception): class DeadAgent(Exception):
pass pass
class BaseAgent(MesaAgent, MutableMapping): class BaseAgent(MesaAgent, MutableMapping):
""" """
A special type of Mesa Agent that: A special type of Mesa Agent that:
@ -82,6 +81,9 @@ class BaseAgent(MesaAgent, MutableMapping):
def __hash__(self): def __hash__(self):
return hash(self.unique_id) return hash(self.unique_id)
def prob(self, probability):
return prob(probability, self.model.random)
# TODO: refactor to clean up mesa compatibility # TODO: refactor to clean up mesa compatibility
@property @property
def id(self): def id(self):
@ -356,7 +358,7 @@ class FSM(BaseAgent, metaclass=MetaFSM):
return state return state
def prob(prob=1): def prob(prob, random):
''' '''
A true/False uniform distribution with a given probability. A true/False uniform distribution with a given probability.
To be used like this: To be used like this:
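The docstring's usage example falls outside this hunk; a minimal standalone sketch of the refactored helper, which now takes the RNG explicitly instead of relying on the global `random` module (a reimplementation for illustration, not imported from Soil):

```python
import random

def prob(probability, random):
    # True/False draw from a uniform distribution with the given probability;
    # passing the RNG explicitly keeps results reproducible under a fixed seed
    return random.random() < probability

rng = random.Random(42)
always = prob(1.0, rng)   # random() is in [0, 1), so this is always True
never = prob(0.0, rng)    # and this is always False
draws = [prob(0.5, rng) for _ in range(10)]
```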
@ -474,7 +476,7 @@ def _convert_agent_classs(ind, to_string=False, **kwargs):
return deserialize_definition(ind, **kwargs) return deserialize_definition(ind, **kwargs)
def _agent_from_definition(definition, value=-1, unique_id=None): def _agent_from_definition(definition, random, value=-1, unique_id=None):
"""Used in the initialization of agents given an agent distribution.""" """Used in the initialization of agents given an agent distribution."""
if value < 0: if value < 0:
value = random.random() value = random.random()
@ -491,7 +493,7 @@ def _agent_from_definition(definition, value=-1, unique_id=None):
raise Exception('Definition for value {} not found in: {}'.format(value, definition)) raise Exception('Definition for value {} not found in: {}'.format(value, definition))
def _definition_to_dict(definition, size=None, default_state=None): def _definition_to_dict(definition, random, size=None, default_state=None):
state = default_state or {} state = default_state or {}
agents = {} agents = {}
remaining = {} remaining = {}
@ -668,7 +670,7 @@ def filter_group(group, *id_args, unique_id=None, state_id=None, agent_class=Non
yield from f yield from f
def from_config(cfg: Dict[str, config.AgentConfig], env): def from_config(cfg: Dict[str, config.AgentConfig], env, random):
''' '''
Agents are specified in groups. Agents are specified in groups.
Each group can be specified in two ways, either through a fixed list in which each item has Each group can be specified in two ways, either through a fixed list in which each item has
@ -677,10 +679,15 @@ def from_config(cfg: Dict[str, config.AgentConfig], env):
of each agent type. of each agent type.
''' '''
default = cfg.get('default', None) default = cfg.get('default', None)
    return {k: _group_from_config(c, default=default, env=env) for (k, c) in cfg.items() if k is not 'default'}          return {k: _group_from_config(c, default=default, env=env, random=random) for (k, c) in cfg.items() if k != 'default'}
def _group_from_config(cfg: config.AgentConfig, default: config.SingleAgentConfig, env, random):
if cfg and not isinstance(cfg, config.AgentConfig):
cfg = config.AgentConfig(**cfg)
if default and not isinstance(default, config.SingleAgentConfig):
default = config.SingleAgentConfig(**default)
def _group_from_config(cfg: config.AgentConfig, default: config.SingleAgentConfig, env):
agents = {} agents = {}
if cfg.fixed is not None: if cfg.fixed is not None:
agents = _from_fixed(cfg.fixed, topology=cfg.topology, default=default, env=env) agents = _from_fixed(cfg.fixed, topology=cfg.topology, default=default, env=env)
@ -690,7 +697,7 @@ def _group_from_config(cfg: config.AgentConfig, default: config.SingleAgentConfi
agents.update(_from_distro(cfg.distribution, target, agents.update(_from_distro(cfg.distribution, target,
topology=cfg.topology or default.topology, topology=cfg.topology or default.topology,
default=default, default=default,
env=env)) env=env, random=random))
assert len(agents) == n assert len(agents) == n
if cfg.override: if cfg.override:
for attrs in cfg.override: for attrs in cfg.override:
@ -733,7 +740,8 @@ def _from_distro(distro: List[config.AgentDistro],
n: int, n: int,
topology: str, topology: str,
default: config.SingleAgentConfig, default: config.SingleAgentConfig,
env): env,
random):
agents = {} agents = {}

@ -9,19 +9,6 @@ from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra from pydantic import BaseModel, Extra
import networkx as nx import networkx as nx
class General(BaseModel):
id: str = 'Unnamed Simulation'
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
interval: float = 1
seed: str = ""
@staticmethod
def default():
return General()
# Could use TypeAlias in python >= 3.10 # Could use TypeAlias in python >= 3.10
nodeId = int nodeId = int
@ -125,10 +112,18 @@ class AgentConfig(SingleAgentConfig):
class Config(BaseModel, extra=Extra.forbid): class Config(BaseModel, extra=Extra.forbid):
version: Optional[str] = '1' version: Optional[str] = '1'
general: General = General.default()
topologies: Optional[Dict[str, NetConfig]] = {} id: str = 'Unnamed Simulation'
environment: EnvConfig = EnvConfig.default() group: str = None
agents: Optional[Dict[str, AgentConfig]] = {} dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
interval: float = 1
seed: str = ""
model_class: Union[Type, str]
    model_parameters: Optional[Dict[str, Any]] = {}
def convert_old(old, strict=True): def convert_old(old, strict=True):
''' '''
@ -137,9 +132,13 @@ def convert_old(old, strict=True):
This is still a work in progress and might not work in many cases. This is still a work in progress and might not work in many cases.
''' '''
#TODO: implement actual conversion
print('The old configuration format is no longer supported. \
Update your config files or run Soil==0.20')
raise NotImplementedError()
new = {}
new = {}
general = {} general = {}
for k in ['id', for k in ['id',
@ -173,8 +172,8 @@ def convert_old(old, strict=True):
'default': {}, 'default': {},
} }
    if 'agent_type' in old:          if 'agent_type' in old:
        agents['default']['agent_class'] = old['agent_type']          agents['default']['agent_class'] = old['agent_type']
if 'default_state' in old: if 'default_state' in old:
agents['default']['state'] = old['default_state'] agents['default']['state'] = old['default_state']
@ -182,8 +181,8 @@ def convert_old(old, strict=True):
def updated_agent(agent): def updated_agent(agent):
newagent = dict(agent) newagent = dict(agent)
        newagent['agent_class'] = newagent['agent_type']          newagent['agent_class'] = newagent['agent_type']
        del newagent['agent_type']          del newagent['agent_type']
return newagent return newagent
for agent in old.get('environment_agents', []): for agent in old.get('environment_agents', []):
@ -207,9 +206,9 @@ def convert_old(old, strict=True):
else: else:
by_weight.append(agent) by_weight.append(agent)
    if 'agent_type' in old and (not fixed and not by_weight):          if 'agent_type' in old and (not fixed and not by_weight):
        agents['network']['topology'] = 'default'          agents['network']['topology'] = 'default'
        by_weight = [{'agent_class': old['agent_type']}]          by_weight = [{'agent_class': old['agent_type']}]
# TODO: translate states properly # TODO: translate states properly

@ -2,23 +2,5 @@ from mesa import DataCollector as MDC
class SoilDataCollector(MDC): class SoilDataCollector(MDC):
def __init__(self, *args, **kwargs):
def __init__(self, environment, *args, **kwargs):
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
# Populate model and env reporters so they have a key per
# So they can be shown in the web interface
self.environment = environment
raise NotImplementedError()
@property
def model_vars(self):
raise NotImplementedError()
@model_vars.setter
def model_vars(self, value):
raise NotImplementedError()
@property
def agent_reporters(self):
raise NotImplementedError()

@ -5,7 +5,7 @@ import math
import random import random
import logging import logging
from typing import Dict from typing import Any, Dict, Optional, Union
from collections import namedtuple from collections import namedtuple
from time import time as current_time from time import time as current_time
from copy import deepcopy from copy import deepcopy
@ -17,20 +17,24 @@ import networkx as nx
from mesa import Model from mesa import Model
from mesa.datacollection import DataCollector from mesa.datacollection import DataCollector
from . import serialization, agents, analysis, utils, time, config, network from . import serialization, analysis, utils, time, network
from .agents import AgentView, BaseAgent, NetworkAgent, from_config as agents_from_config
Record = namedtuple('Record', 'dict_id t_step key value') Record = namedtuple('Record', 'dict_id t_step key value')
class Environment(Model): class BaseEnvironment(Model):
""" """
The environment is key in a simulation. It contains the network topology, The environment is key in a simulation. It controls how agents interact,
a reference to network and environment agents, as well as the environment and what information is available to them.
params, which are used as shared state between agents.
This is an opinionated version of `mesa.Model` class, which adds many
convenience methods and abstractions.
The environment parameters and the state of every agent can be accessed The environment parameters and the state of every agent can be accessed
both by using the environment as a dictionary or with the environment's both by using the environment as a dictionary and with the environment's
:meth:`soil.environment.Environment.get` method. :meth:`soil.environment.Environment.get` method.
""" """
@ -40,67 +44,62 @@ class Environment(Model):
schedule=None, schedule=None,
dir_path=None, dir_path=None,
interval=1, interval=1,
agents: Dict[str, config.AgentConfig] = {}, agent_class=BaseAgent,
                 topologies: Dict[str, config.NetConfig] = {},          agents: list[tuple[type, Dict[str, Any]]] = [],
agent_reporters: Optional[Any] = None, agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None, model_reporters: Optional[Any] = None,
tables: Optional[Any] = None, tables: Optional[Any] = None,
**env_params): **env_params):
super().__init__() super().__init__(seed=seed)
self.current_id = -1 self.current_id = -1
self.seed = '{}_{}'.format(seed, env_id)
self.id = env_id self.id = env_id
self.dir_path = dir_path or os.getcwd() self.dir_path = dir_path or os.getcwd()
if schedule is None: if schedule is None:
schedule = time.TimedActivation() schedule = time.TimedActivation(self)
self.schedule = schedule self.schedule = schedule
seed = seed or current_time() self.agent_class = agent_class
random.seed(seed)
self.init_agents(agents)
self.topologies = {}
self._node_ids = {}
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg,
graph=name)
self.agents = agents or {}
self.env_params = env_params or {} self.env_params = env_params or {}
self.interval = interval self.interval = interval
self['SEED'] = seed
self.logger = utils.logger.getChild(self.id) self.logger = utils.logger.getChild(self.id)
self.datacollector = DataCollector(model_reporters, agent_reporters, tables)
@property self.datacollector = DataCollector(
def topology(self): model_reporters=model_reporters,
return self.topologies['default'] agent_reporters=agent_reporters,
tables=tables,
)
def __read_agent_tuple(self, tup):
cls = self.agent_class
args = tup
if isinstance(tup, tuple):
cls = tup[0]
args = tup[1]
return serialization.deserialize(cls)(unique_id=self.next_id(),
model=self, **args)
    def init_agents(self, agents: list[tuple[type, Dict[str, Any]]] = []):
agents = [self.__read_agent_tuple(tup) for tup in agents]
self._agents = {'default': {agent.id: agent for agent in agents}}
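`BaseEnvironment.init_agents` now accepts either bare agent classes or `(class, kwargs)` tuples; a simplified sketch of that tuple-reading convention (toy classes, without Soil's string-deserialization step):

```python
class ToyAgent:
    def __init__(self, unique_id, model, **kwargs):
        self.unique_id = unique_id
        self.model = model
        for k, v in kwargs.items():
            setattr(self, k, v)

def read_agent_tuple(tup, model, unique_id):
    # A bare class means "no extra kwargs"; a (cls, kwargs) tuple carries state
    cls, args = tup if isinstance(tup, tuple) else (tup, {})
    return cls(unique_id=unique_id, model=model, **args)

defs = [ToyAgent, (ToyAgent, {'state_id': 'neutral'})]
agents = [read_agent_tuple(t, model=None, unique_id=i)
          for i, t in enumerate(defs)]
```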
@property @property
def network_agents(self): def agents(self):
yield from self.agents(agent_class=agents.NetworkAgent) return AgentView(self._agents)
@staticmethod def find_one(self, *args, **kwargs):
def from_config(conf: config.Config, trial_id, **kwargs) -> Environment: return AgentView(self._agents).one(*args, **kwargs)
'''Create an environment for a trial of the simulation'''
conf = conf def count_agents(self, *args, **kwargs):
if kwargs: return sum(1 for i in self.agents(*args, **kwargs))
conf = config.Config(**conf.dict(exclude_defaults=True), **kwargs)
seed = '{}_{}'.format(conf.general.seed, trial_id)
id = '{}_trial_{}'.format(conf.general.id, trial_id).replace('.', '-')
opts = conf.environment.params.copy()
dir_path = conf.general.dir_path
opts.update(conf)
opts.update(kwargs)
env = serialization.deserialize(conf.environment.environment_class)(env_id=id, seed=seed, dir_path=dir_path, **opts)
return env
@property @property
def now(self): def now(self):
@ -109,115 +108,42 @@ class Environment(Model):
raise Exception('The environment has not been scheduled, so it has no sense of time') raise Exception('The environment has not been scheduled, so it has no sense of time')
def topology_for(self, agent_id): # def init_agent(self, agent_id, agent_definitions, state=None):
return self.topologies[self._node_ids[agent_id][0]] # state = state or {}
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
def set_topology(self, cfg=None, dir_path=None, graph='default'):
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology
@property
def agents(self):
return agents.AgentView(self._agents)
def count_agents(self, *args, **kwargs):
return sum(1 for i in self.find_all(*args, **kwargs))
def find_all(self, *args, **kwargs):
return agents.AgentView(self._agents).filter(*args, **kwargs)
def find_one(self, *args, **kwargs):
return agents.AgentView(self._agents).one(*args, **kwargs)
@agents.setter
def agents(self, agents_def: Dict[str, config.AgentConfig]):
self._agents = agents.from_config(agents_def, env=self)
for d in self._agents.values():
for a in d.values():
self.schedule.add(a)
def init_agent(self, agent_id, agent_definitions, graph='default'):
node = self.topologies[graph].nodes[agent_id]
init = False
state = dict(node)
agent_class = None # agent_class = None
if 'agent_class' in self.states.get(agent_id, {}): # if 'agent_class' in self.states.get(agent_id, {}):
agent_class = self.states[agent_id]['agent_class'] # agent_class = self.states[agent_id]['agent_class']
elif 'agent_class' in node: # elif 'agent_class' in self.default_state:
agent_class = node['agent_class'] # agent_class = self.default_state['agent_class']
elif 'agent_class' in self.default_state:
agent_class = self.default_state['agent_class']
if agent_class: # if agent_class:
agent_class = agents.deserialize_type(agent_class) # agent_class = agents.deserialize_type(agent_class)
elif agent_definitions: # elif agent_definitions:
agent_class, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id) # agent_class, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
else: # else:
serialization.logger.debug('Skipping node {}'.format(agent_id)) # serialization.logger.debug('Skipping agent {}'.format(agent_id))
return # return
return self.set_agent(agent_id, agent_class, state) # return self.add_agent(agent_id, agent_class, state)
def agent_to_node(self, agent_id, graph_name='default', node_id=None, shuffle=False):
#TODO: test
if node_id is None:
G = self.topologies[graph_name]
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if data.get('agent_id', None) is None:
node_id = next_id
data['agent_id'] = agent_id
break
self._node_ids[agent_id] = (graph_name, node_id) def add_agent(self, agent_id, agent_class, state=None, graph='default'):
print(self._node_ids)
def set_agent(self, agent_id, agent_class, state=None, graph='default'):
node = self.topologies[graph].nodes[agent_id]
defstate = deepcopy(self.default_state) or {} defstate = deepcopy(self.default_state) or {}
defstate.update(self.states.get(agent_id, {})) defstate.update(self.states.get(agent_id, {}))
defstate.update(node.get('state', {}))
if state: if state:
defstate.update(state) defstate.update(state)
a = None a = None
if agent_class: if agent_class:
state = defstate state = defstate
a = agent_class(model=self, a = agent_class(model=self,
unique_id=agent_id unique_id=agent_id)
)
for (k, v) in state.items(): for (k, v) in state.items():
setattr(a, k, v) setattr(a, k, v)
node['agent'] = a
self.schedule.add(a) self.schedule.add(a)
return a return a
def add_node(self, agent_class, state=None, graph='default'):
agent_id = int(len(self.topologies[graph].nodes()))
self.topologies[graph].add_node(agent_id)
a = self.set_agent(agent_id, agent_class, state, graph=graph)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
if hasattr(agent1, 'id'):
agent1 = agent1.id
if hasattr(agent2, 'id'):
agent2 = agent2.id
start = start or self.now
return self.topologies[graph].add_edge(agent1, agent2, **attrs)
def log(self, message, *args, level=logging.INFO, **kwargs): def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level): if not self.logger.isEnabledFor(level):
return return
@ -238,14 +164,6 @@ class Environment(Model):
self.schedule.step() self.schedule.step()
self.datacollector.collect(self) self.datacollector.collect(self)
def run(self, until, *args, **kwargs):
until = until or float('inf')
while self.schedule.next_time < until:
self.step()
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
self.schedule.time = until
def __contains__(self, key): def __contains__(self, key):
return key in self.env_params return key in self.env_params
@ -289,5 +207,90 @@ class Environment(Model):
yield from self._agent_to_tuples(agent, now) yield from self._agent_to_tuples(agent, now)
class AgentConfigEnvironment(BaseEnvironment):
def __init__(self, *args,
agents: Dict[str, config.AgentConfig] = {},
**kwargs):
return super().__init__(*args, agents=agents, **kwargs)
    def init_agents(self, agents: Union[Dict[str, config.AgentConfig], list[tuple[type, Dict[str, Any]]]] = {}):
if not isinstance(agents, dict):
return BaseEnvironment.init_agents(self, agents)
self._agents = agents_from_config(agents,
env=self,
random=self.random)
for d in self._agents.values():
for a in d.values():
self.schedule.add(a)
class NetworkConfigEnvironment(BaseEnvironment):
def __init__(self, *args, topologies: Dict[str, config.NetConfig] = {}, **kwargs):
super().__init__(*args, **kwargs)
self.topologies = {}
self._node_ids = {}
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg, graph=name)
@property
def topology(self):
return self.topologies['default']
def set_topology(self, cfg=None, dir_path=None, graph='default'):
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology
def topology_for(self, agent_id):
return self.topologies[self._node_ids[agent_id][0]]
@property
def network_agents(self):
yield from self.agents(agent_class=NetworkAgent)
def agent_to_node(self, agent_id, graph_name='default', node_id=None, shuffle=False):
node_id = network.agent_to_node(G=self.topologies[graph_name], agent_id=agent_id,
node_id=node_id, shuffle=shuffle,
random=self.random)
self._node_ids[agent_id] = (graph_name, node_id)
def add_node(self, agent_class, state=None, graph='default'):
agent_id = int(len(self.topologies[graph].nodes()))
self.topologies[graph].add_node(agent_id)
a = self.add_agent(agent_id, agent_class, state, graph=graph)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
if hasattr(agent1, 'id'):
agent1 = agent1.id
if hasattr(agent2, 'id'):
agent2 = agent2.id
start = start or self.now
return self.topologies[graph].add_edge(agent1, agent2, **attrs)
    def add_agent(self, agent_id, *args, state=None, graph='default', **kwargs):
        node = self.topologies[graph].nodes[agent_id]
        node_state = node.get('state', {})
        if node_state:
            node_state.update(state or {})
            state = node_state
        a = super().add_agent(agent_id, *args, state=state, **kwargs)
        node['agent'] = a
        return a
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
class Environment(AgentConfigEnvironment, NetworkConfigEnvironment):
    def __init__(self, *args, **kwargs):
        agents = kwargs.pop('agents', {})
        NetworkConfigEnvironment.__init__(self, *args, **kwargs)
        AgentConfigEnvironment.__init__(self, *args, agents=agents, **kwargs)

SoilEnvironment = Environment
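The combined `Environment` above splits initialization across two base classes, each consuming its own keyword arguments. A minimal self-contained sketch of that pattern (class and attribute names here are illustrative, not soil's API):

```python
# Mixin-style split: each part owns one slice of the constructor arguments.
class AgentPart:
    def __init__(self, agents=None):
        self.agents = dict(agents or {})

class NetworkPart:
    def __init__(self, topologies=None):
        self.topologies = dict(topologies or {})

class Env(AgentPart, NetworkPart):
    def __init__(self, agents=None, topologies=None):
        # Initialize topologies first so agent setup can rely on them,
        # mirroring the order used by soil's Environment.
        NetworkPart.__init__(self, topologies=topologies)
        AgentPart.__init__(self, agents=agents)

e = Env(agents={'a': 1}, topologies={'default': None})
assert e.agents == {'a': 1}
assert 'default' in e.topologies
```

Calling each `__init__` explicitly (instead of relying on `super()`) keeps the two argument sets independent, at the cost of bypassing cooperative multiple inheritance.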

@@ -49,8 +49,8 @@ class Exporter:
        self.simulation = simulation
        outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
        self.outdir = os.path.join(outdir,
-                                   simulation.config.general.group or '',
-                                   simulation.config.general.id)
+                                   simulation.group or '',
+                                   simulation.name)
        self.dry_run = dry_run
        self.copy_to = copy_to

@@ -1,6 +1,7 @@
from typing import Dict
import os
import sys
+import random

import networkx as nx

@@ -40,3 +41,25 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
        return nx.json_graph.node_link_graph(cfg.topology)
    return nx.Graph()
def agent_to_node(G, agent_id, node_id=None, shuffle=False, random=random):
'''
Link an agent to a node in a topology.
If node_id is None, a node without an agent_id will be found.
'''
#TODO: test
if node_id is None:
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if data.get('agent_id', None) is None:
node_id = next_id
data['agent_id'] = agent_id
break
if node_id is None:
raise ValueError(f"Not enough nodes in topology to assign one to agent {agent_id}")
return node_id
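The assignment logic of `agent_to_node` (still marked `#TODO: test` above) can be exercised standalone. A sketch using a plain dict of node attributes in place of a networkx graph — `assign_agent` is a hypothetical stand-in, not soil's function:

```python
import random

def assign_agent(nodes, agent_id, node_id=None, shuffle=False, rng=random):
    """Link agent_id to the first node whose 'agent_id' attribute is unset."""
    if node_id is None:
        candidates = list(nodes.items())
        if shuffle:
            rng.shuffle(candidates)
        for next_id, data in candidates:
            if data.get('agent_id') is None:
                node_id = next_id
                data['agent_id'] = agent_id  # claim the node
                break
    if node_id is None:
        raise ValueError(f"Not enough nodes in topology to assign one to agent {agent_id}")
    return node_id

nodes = {0: {}, 1: {}}
assert assign_agent(nodes, 'a1') == 0
assert assign_agent(nodes, 'a2') == 1
```

With `shuffle=False` the scan order is deterministic; passing a seeded `rng` keeps shuffled assignment reproducible, which is why the real function threads `random=self.random` through.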

@@ -122,8 +122,6 @@ def load_files(*patterns, **kwargs):
    for i in glob(pattern, **kwargs):
        for config in load_file(i):
            path = os.path.abspath(i)
-            if 'general' in config and 'dir_path' not in config['general']:
-                config['general']['dir_path'] = os.path.dirname(path)
            yield config, path

@@ -7,6 +7,10 @@ import traceback
import logging
import networkx as nx

+from dataclasses import dataclass, field, asdict
+from typing import Union

from networkx.readwrite import json_graph
from multiprocessing import Pool
from functools import partial
@@ -14,13 +18,15 @@ import pickle

from . import serialization, utils, basestring, agents
from .environment import Environment
-from .utils import logger
+from .utils import logger, run_and_return_exceptions
from .exporters import default
from .time import INFINITY
from .config import Config, convert_old

#TODO: change documentation for simulation
+@dataclass
class Simulation:
    """
    Parameters
@@ -30,23 +36,16 @@ class Simulation:
    kwargs: parameters to use to initialize a new configuration, if one has not been provided.
    """

-    def __init__(self, config=None,
-                 **kwargs):
-        if kwargs:
-            cfg = {}
-            if config:
-                cfg.update(config.dict(include_defaults=False))
-            cfg.update(kwargs)
-            config = Config(**cfg)
-        if not config:
-            raise ValueError("You need to specify a simulation configuration")
-        self.config = config
-
-    @property
-    def name(self) -> str:
-        return self.config.general.id
+    name: str = 'Unnamed simulation'
+    group: str = None
+    model_class: Union[str, type] = 'soil.Environment'
+    model_params: dict = field(default_factory=dict)
+    seed: str = field(default_factory=lambda: current_time())
+    dir_path: str = field(default_factory=lambda: os.getcwd())
+    max_time: float = float('inf')
+    max_steps: int = -1
+    num_trials: int = 3
+    dry_run: bool = False

    def run_simulation(self, *args, **kwargs):
        return self.run(*args, **kwargs)
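The commit replaces hand-rolled `__init__` config plumbing with a `@dataclass`, so defaults, `asdict` serialization, and keyword construction come for free. A reduced standalone sketch (only a subset of the fields above, with illustrative defaults):

```python
import os
from dataclasses import dataclass, field, asdict
from time import time as current_time

@dataclass
class Simulation:
    # Field defaults mirror the declaration style used in the commit;
    # mutable/derived defaults must go through default_factory.
    name: str = 'Unnamed simulation'
    group: str = None
    model_params: dict = field(default_factory=dict)
    seed: str = field(default_factory=lambda: str(current_time()))
    dir_path: str = field(default_factory=os.getcwd)
    max_time: float = float('inf')
    max_steps: int = -1
    num_trials: int = 3
    dry_run: bool = False

s = Simulation(name='demo', num_trials=1)
d = asdict(s)
assert d['name'] == 'demo'
assert s.max_steps == -1
```

Note that a plain `= {}` default would be rejected by `dataclasses` (shared mutable default), which is why `model_params` uses `default_factory=dict`.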
@@ -58,14 +57,14 @@ class Simulation:
    def _run_sync_or_async(self, parallel=False, **kwargs):
        if parallel and not os.environ.get('SENPY_DEBUG', None):
            p = Pool()
-            func = partial(self.run_trial_exceptions, **kwargs)
-            for i in p.imap_unordered(func, range(self.config.general.num_trials)):
+            func = partial(run_and_return_exceptions, self.run_trial, **kwargs)
+            for i in p.imap_unordered(func, range(self.num_trials)):
                if isinstance(i, Exception):
                    logger.error('Trial failed:\n\t%s', i.message)
                    continue
                yield i
        else:
            for i in range(self.num_trials):
                yield self.run_trial(trial_id=i,
                                     **kwargs)
@@ -80,12 +79,12 @@ class Simulation:
        logger.info('Output directory: %s', outdir)
        exporters = serialization.deserialize_all(exporters,
                                                  simulation=self,
-                                                 known_modules=['soil.exporters',],
+                                                 known_modules=['soil.exporters', ],
                                                  dry_run=dry_run,
                                                  outdir=outdir,
                                                  **exporter_params)

-        with utils.timer('simulation {}'.format(self.config.general.id)):
+        with utils.timer('simulation {}'.format(self.name)):
            for exporter in exporters:
                exporter.sim_start()
@@ -104,95 +103,95 @@ class Simulation:
            for exporter in exporters:
                exporter.sim_end()

-    def run_model(self, until=None, *args, **kwargs):
-        until = until or float('inf')
-        while self.schedule.next_time < until:
-            self.step()
-            utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
-        self.schedule.time = until
-
    def get_env(self, trial_id=0, **kwargs):
        '''Create an environment for a trial of the simulation'''
-        # opts = self.environment_params.copy()
-        # opts.update({
-        #     'name': '{}_trial_{}'.format(self.name, trial_id),
-        #     'topology': self.topology.copy(),
-        #     'network_params': self.network_params,
-        #     'seed': '{}_trial_{}'.format(self.seed, trial_id),
-        #     'initial_time': 0,
-        #     'interval': self.interval,
-        #     'network_agents': self.network_agents,
-        #     'initial_time': 0,
-        #     'states': self.states,
-        #     'dir_path': self.dir_path,
-        #     'default_state': self.default_state,
-        #     'history': bool(self._history),
-        #     'environment_agents': self.environment_agents,
-        # })
-        # opts.update(kwargs)
-        print(self.config)
-        env = Environment.from_config(self.config, trial_id=trial_id, **kwargs)
-        return env
+        def deserialize_reporters(reporters):
+            for (k, v) in reporters.items():
+                if isinstance(v, str) and v.startswith('py:'):
+                    reporters[k] = serialization.deserialize(v.split(':', 1)[1])
+            return reporters
+
+        model_params = self.model_params.copy()
+        model_params.update(kwargs)
+
+        agent_reporters = deserialize_reporters(model_params.pop('agent_reporters', {}))
+        model_reporters = deserialize_reporters(model_params.pop('model_reporters', {}))
+
+        env = serialization.deserialize(self.model_class)
+        return env(id=f'{self.name}_trial_{trial_id}',
+                   seed=f'{self.seed}_trial_{trial_id}',
+                   dir_path=self.dir_path,
+                   agent_reporters=agent_reporters,
+                   model_reporters=model_reporters,
+                   **model_params)

    def run_trial(self, trial_id=None, until=None, log_level=logging.INFO, **opts):
        """
        Run a single trial of the simulation
        """
+        model = self.get_env(trial_id, **opts)
+        return self.run_model(model, trial_id=trial_id, until=until, log_level=log_level)
+
+    def run_model(self, model, trial_id=None, until=None, log_level=logging.INFO, **opts):
        trial_id = trial_id if trial_id is not None else current_time()
        if log_level:
            logger.setLevel(log_level)

        # Set-up trial environment and graph
-        until = until or self.config.general.max_time
-        env = self.get_env(trial_id, **opts)
+        until = until or self.max_time

        # Set up agents on nodes
-        with utils.timer('Simulation {} trial {}'.format(self.config.general.id, trial_id)):
-            env.run(until)
-        return env
-
-    def run_trial_exceptions(self, *args, **kwargs):
-        '''
-        A wrapper for run_trial that catches exceptions and returns them.
-        It is meant for async simulations
-        '''
-        try:
-            return self.run_trial(*args, **kwargs)
-        except Exception as ex:
-            if ex.__cause__ is not None:
-                ex = ex.__cause__
-            ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__)[:])
-            return ex
+        is_done = lambda: False
+        if self.max_time and hasattr(model.schedule, 'time'):
+            is_done = lambda: model.schedule.time >= self.max_time
+        if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, 'steps'):
+            prev_done = is_done
+            is_done = lambda: prev_done() or model.schedule.steps >= self.max_steps
+
+        with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
+            while not is_done():
+                utils.logger.debug(f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", None)}')
+                model.step()
+        return model

    def to_dict(self):
-        return self.config.dict()
+        d = asdict(self)
+        d['model_class'] = serialization.serialize(d['model_class'])[0]
+        d['model_params'] = serialization.serialize(d['model_params'])[0]
+        d['dir_path'] = str(d['dir_path'])
+        return d

    def to_yaml(self):
-        return yaml.dump(self.config.dict())
+        return yaml.dump(self.to_dict())
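`run_model` builds its stopping condition by stacking independent predicates (`max_time`, `max_steps`). A standalone sketch of that composition pattern, using hypothetical names and a fake schedule in place of a Mesa model:

```python
def combine(*conditions):
    """Return a predicate that is True when any of the given conditions holds."""
    def is_done():
        return any(cond() for cond in conditions)
    return is_done

class FakeSchedule:
    """Stand-in for a Mesa scheduler: tracks simulated time and step count."""
    def __init__(self):
        self.time = 0.0
        self.steps = 0
    def step(self):
        self.time += 1.0
        self.steps += 1

sched = FakeSchedule()
max_time, max_steps = 5.0, 3
is_done = combine(lambda: sched.time >= max_time,
                  lambda: sched.steps >= max_steps)
while not is_done():
    sched.step()
assert sched.steps == 3  # max_steps fires before max_time here
```

Capturing the previous predicate in a local (as the corrected `prev_done` above does) avoids the pitfall of a lambda that calls itself through the name it is being assigned to.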
-def all_from_config(config):
+def iter_from_config(config):
    configs = list(serialization.load_config(config))
    for config, path in configs:
-        if config.get('version', '1') == '1':
-            config = convert_old(config)
-        if not isinstance(config, Config):
-            config = Config(**config)
-        if not config.general.dir_path:
-            config.general.dir_path = os.path.dirname(path)
-        sim = Simulation(config=config)
-        yield sim
+        d = dict(config)
+        if 'dir_path' not in d:
+            d['dir_path'] = os.path.dirname(path)
+        if d.get('version', '2') == '1' or 'agents' in d or 'network_agents' in d or 'environment_agents' in d:
+            d = convert_old(d)
+        d.pop('version', None)
+        yield Simulation(**d)

def from_config(conf_or_path):
-    lst = list(all_from_config(conf_or_path))
+    lst = list(iter_from_config(conf_or_path))
    if len(lst) > 1:
        raise AttributeError('Provide only one configuration')
    return lst[0]

-def from_old_config(conf_or_path):
-    config = list(serialization.load_config(conf_or_path))
-    if len(config) > 1:
-        raise AttributeError('Provide only one configuration')
-    config = convert_old(config[0][0])
-    return Simulation(config)
-
def run_from_config(*configs, **kwargs):
-    for sim in all_from_config(configs):
-        name = config.general.id
-        logger.info("Using config(s): {name}".format(name=name))
+    for sim in iter_from_config(configs):
+        logger.info(f"Using config(s): {sim.name}")
        sim.run_simulation(**kwargs)
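`iter_from_config` decides whether to route a loaded dict through `convert_old` based on its version field and the presence of v1-style agent lists. The detection rule, restated as a standalone predicate (the function name is illustrative):

```python
def needs_conversion(d):
    """True when a config dict uses the old (v1) schema and must be converted."""
    return (d.get('version', '2') == '1'
            or 'agents' in d
            or 'network_agents' in d
            or 'environment_agents' in d)

assert needs_conversion({'version': '1'})
assert needs_conversion({'network_agents': []})
assert not needs_conversion({'version': '2', 'model_params': {}})
```

Note the asymmetric defaults: a missing `version` is treated as `'2'` here, whereas the removed `all_from_config` defaulted it to `'1'` — the presence checks on agent keys are what still catch unversioned old configs.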

@@ -37,9 +37,10 @@ class TimedActivation(BaseScheduler):
    """

    def __init__(self, *args, **kwargs):
-        super().__init__(self)
+        super().__init__(*args, **kwargs)
        self._queue = []
        self.next_time = 0
+        self.logger = logger.getChild(f'time_{ self.model }')

    def add(self, agent: MesaAgent):
        if agent.unique_id not in self._agents:
@@ -52,7 +53,8 @@ class TimedActivation(BaseScheduler):
        an agent will signal when it wants to be scheduled next.
        """

-        if self.next_time == INFINITY:
+        self.logger.debug(f'Simulation step {self.next_time}')
+        if not self.model.running:
            return

        self.time = self.next_time
@@ -60,7 +62,7 @@ class TimedActivation(BaseScheduler):
        while self._queue and self._queue[0][0] == self.time:
            (when, agent_id) = heappop(self._queue)
-            logger.debug(f'Stepping agent {agent_id}')
+            self.logger.debug(f'Stepping agent {agent_id}')
            returned = self._agents[agent_id].step()

        when = (returned or Delta(1)).abs(self.time)
@@ -74,7 +76,8 @@ class TimedActivation(BaseScheduler):
        if not self._queue:
            self.time = INFINITY
            self.next_time = INFINITY
+            self.model.running = False
            return

        self.next_time = self._queue[0][0]
+        self.logger.debug(f'Next step: {self.next_time}')
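`TimedActivation` keeps pending activations in a heap keyed by wake-up time and steps every agent whose entry matches the current time. A minimal standalone sketch of that queue discipline (`MiniScheduler` is illustrative, not soil's class):

```python
import heapq

class MiniScheduler:
    """Time-ordered activation queue: step all agents scheduled for the earliest time."""
    def __init__(self):
        self._queue = []   # heap of (when, agent_id)
        self.time = 0
    def schedule(self, when, agent_id):
        heapq.heappush(self._queue, (when, agent_id))
    def step(self):
        if not self._queue:
            return None
        # Advance to the earliest scheduled time, then drain all entries for it.
        self.time = self._queue[0][0]
        stepped = []
        while self._queue and self._queue[0][0] == self.time:
            _, agent_id = heapq.heappop(self._queue)
            stepped.append(agent_id)
        return stepped

s = MiniScheduler()
s.schedule(2, 'b'); s.schedule(1, 'a'); s.schedule(1, 'c')
assert s.step() == ['a', 'c']   # both time-1 agents fire together
assert s.step() == ['b']
```

In the real scheduler each stepped agent returns a delay that gets pushed back onto the heap, which is how agents "signal when they want to be scheduled next".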

@@ -1,6 +1,7 @@
import logging
from time import time as current_time, strftime, gmtime, localtime
import os
+import traceback

from shutil import copyfile

@@ -89,3 +90,17 @@ def unflatten_dict(d):
        target = target[token]
    target[tokens[-1]] = v
    return out
def run_and_return_exceptions(func, *args, **kwargs):
    '''
    A wrapper that calls func and catches any exception, returning it
    instead of raising. It is meant for async simulations.
    '''
    try:
        return func(*args, **kwargs)
    except Exception as ex:
        if ex.__cause__ is not None:
            ex = ex.__cause__
        ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__))
        return ex
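Because exceptions cannot cross `Pool.imap_unordered` cleanly, the wrapper converts them into return values that the caller filters with `isinstance(i, Exception)`. A quick self-contained check of that contract (restated here so it runs standalone; `flaky` is a made-up example function):

```python
import traceback

def run_and_return_exceptions(func, *args, **kwargs):
    """Call func; on failure return the exception (with a .message traceback) instead of raising."""
    try:
        return func(*args, **kwargs)
    except Exception as ex:
        if ex.__cause__ is not None:
            ex = ex.__cause__
        ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__))
        return ex

def flaky(n):
    if n % 2:
        raise ValueError(f'trial {n} failed')
    return n

results = [run_and_return_exceptions(flaky, i) for i in range(4)]
assert results[0] == 0 and results[2] == 2
assert isinstance(results[1], ValueError)
assert 'trial 1 failed' in results[1].message
```

This is also why the function must be a top-level callable taking `func` explicitly: `partial(run_and_return_exceptions, self.run_trial, ...)` has to be picklable for the worker pool.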

@@ -6,11 +6,11 @@ network_params:
    n: 100
    m: 2
network_agents:
-    - agent_type: ControlModelM2
+    - agent_class: ControlModelM2
      weight: 0.1
      state:
        id: 1
-    - agent_type: ControlModelM2
+    - agent_class: ControlModelM2
      weight: 0.9
      state:
        id: 0

@@ -10,21 +10,21 @@ network_params:
    generator: complete_graph
    n: 10
network_agents:
-    - agent_type: CounterModel
+    - agent_class: CounterModel
      weight: 0.4
      state:
        state_id: 0
-    - agent_type: AggregatedCounter
+    - agent_class: AggregatedCounter
      weight: 0.6
environment_agents:
    - agent_id: 'Environment Agent 1'
-      agent_type: CounterModel
+      agent_class: CounterModel
      state:
        times: 10
environment_class: Environment
environment_params:
    am_i_complete: true
-agent_type: CounterModel
+agent_class: CounterModel
default_state:
    times: 1
states:

@@ -46,7 +46,7 @@ class TestAnalysis(TestCase):
                'generator': 'complete_graph',
                'n': 2
            },
-            'agent_type': Ping,
+            'agent_class': Ping,
            'states': [{'interval': 1}, {'interval': 2}],
            'max_time': 30,
            'num_trials': 1,

@@ -1,8 +1,10 @@
from unittest import TestCase
import os
+import yaml
+import copy
from os.path import join

-from soil import simulation, serialization, config, network, agents
+from soil import simulation, serialization, config, network, agents, utils

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')

@@ -10,6 +12,17 @@ EXAMPLES = join(ROOT, '..', 'examples')
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
def isequal(a, b):
if isinstance(a, dict):
for (k, v) in a.items():
if v:
isequal(a[k], b[k])
else:
assert not b.get(k, None)
return
assert a == b
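The `isequal` helper promoted to module level above is deliberately one-sided: keys with falsy values in the expected dict may be missing from the actual one. A standalone demonstration of that asymmetry:

```python
def isequal(a, b):
    """Recursive comparison where falsy values in `a` may be absent from `b`."""
    if isinstance(a, dict):
        for (k, v) in a.items():
            if v:
                isequal(a[k], b[k])
            else:
                assert not b.get(k, None)
        return
    assert a == b

isequal({'x': 1, 'y': {}}, {'x': 1})   # passes: falsy 'y' may be missing from b
try:
    isequal({'x': 1}, {'x': 2})
    raised = False
except AssertionError:
    raised = True
assert raised
```

This tolerance is what lets the converted config (with defaults skipped) be compared against a hand-written expected dict.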
class TestConfig(TestCase):

    def test_conversion(self):
@@ -18,18 +31,23 @@ class TestConfig(TestCase):
        converted_defaults = config.convert_old(old, strict=False)
        converted = converted_defaults.dict(skip_defaults=True)

-        def isequal(a, b):
-            if isinstance(a, dict):
-                for (k, v) in a.items():
-                    if v:
-                        isequal(a[k], b[k])
-                    else:
-                        assert not b.get(k, None)
-                return
-            assert a == b
-
        isequal(converted, expected)
def test_configuration_changes(self):
"""
The configuration should not change after running
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
init_config = copy.copy(s.config)
s.run_simulation(dry_run=True)
nconfig = s.config
# del nconfig['to
isequal(init_config, nconfig)
    def test_topology_config(self):
        netconfig = config.NetConfig(**{
            'path': join(ROOT, 'test.gexf')
@@ -48,7 +66,7 @@ class TestConfig(TestCase):
            'network_params': {
                'path': join(ROOT, 'test.gexf')
            },
-            'agent_type': 'CounterModel',
+            'agent_class': 'CounterModel',
            # 'states': [{'times': 10}, {'times': 20}],
            'max_time': 2,
            'dry_run': True,
@@ -63,7 +81,6 @@ class TestConfig(TestCase):
        assert len(env.agents) == 2
        assert env.agents[0].topology == env.topologies['default']

    def test_agents_from_config(self):
        '''We test that the known complete configuration produces
        the right agents in the right groups'''
@@ -74,8 +91,25 @@ class TestConfig(TestCase):
        assert len(env.agents(group='network')) == 10
        assert len(env.agents(group='environment')) == 1

-        assert sum(1 for a in env.agents(group='network', agent_type=agents.CounterModel)) == 4
-        assert sum(1 for a in env.agents(group='network', agent_type=agents.AggregatedCounter)) == 6
+        assert sum(1 for a in env.agents(group='network', agent_class=agents.CounterModel)) == 4
+        assert sum(1 for a in env.agents(group='network', agent_class=agents.AggregatedCounter)) == 6
def test_yaml(self):
"""
The YAML version of a newly created configuration should be equivalent
to the configuration file used.
Values not present in the original config file should have reasonable
defaults.
"""
with utils.timer('loading'):
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
with utils.timer('serializing'):
serial = s.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
for (k, v) in config.items():
assert recovered[k] == v
def make_example_test(path, cfg):
    def wrapped(self):

@@ -36,7 +36,7 @@ class Exporters(TestCase):
        config = {
            'name': 'exporter_sim',
            'network_params': {},
-            'agent_type': 'CounterModel',
+            'agent_class': 'CounterModel',
            'max_time': 2,
            'num_trials': 5,
            'environment_params': {}
@@ -62,7 +62,7 @@ class Exporters(TestCase):
                'generator': 'complete_graph',
                'n': 4
            },
-            'agent_type': 'CounterModel',
+            'agent_class': 'CounterModel',
            'max_time': 2,
            'num_trials': n_trials,
            'dry_run': False,

@@ -41,7 +41,7 @@ class TestHistory(TestCase):
                'path': join(ROOT, 'test.gexf')
            },
            'network_agents': [{
-                'agent_type': 'AggregatedCounter',
+                'agent_class': 'AggregatedCounter',
                'weight': 1,
                'state': {'state_id': 0}

@@ -1,9 +1,6 @@
from unittest import TestCase
import os
-import io
-import yaml
-import copy
import pickle

import networkx as nx
from functools import partial
@@ -29,56 +26,17 @@ class CustomAgent(agents.FSM, agents.NetworkAgent):

class TestMain(TestCase):
def test_load_graph(self):
"""
Load a graph from file if the extension is known.
Raise an exception otherwise.
"""
config = {
'network_params': {
'path': join(ROOT, 'test.gexf')
}
}
G = network.from_config(config['network_params'])
assert G
assert len(G) == 2
with self.assertRaises(AttributeError):
config = {
'network_params': {
'path': join(ROOT, 'unknown.extension')
}
}
G = network.from_config(config['network_params'])
print(G)
def test_generate_barabasi(self):
"""
If no path is given, a generator and network parameters
should be used to generate a network
"""
cfg = {
'params': {
'generator': 'barabasi_albert_graph'
}
}
with self.assertRaises(Exception):
G = network.from_config(cfg)
cfg['params']['n'] = 100
cfg['params']['m'] = 10
G = network.from_config(cfg)
assert len(G) == 100
    def test_empty_simulation(self):
        """A simulation with a base behaviour should do nothing"""
        config = {
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            },
-            'agent_type': 'BaseAgent',
-            'environment_params': {
-            }
+            'model_params': {
+                'network_params': {
+                    'path': join(ROOT, 'test.gexf')
+                },
+                'agent_class': 'BaseAgent',
+            }
        }
-        s = simulation.from_old_config(config)
+        s = simulation.from_config(config)
        s.run_simulation(dry_run=True)
@@ -88,21 +46,21 @@ class TestMain(TestCase):
        agent should be able to update its state."""
        config = {
            'name': 'CounterAgent',
-            'network_params': {
-                'generator': nx.complete_graph,
-                'n': 2,
-            },
-            'agent_type': 'CounterModel',
-            'states': {
-                0: {'times': 10},
-                1: {'times': 20},
-            },
-            'max_time': 2,
            'num_trials': 1,
-            'environment_params': {
+            'max_time': 2,
+            'model_params': {
+                'network_params': {
+                    'generator': nx.complete_graph,
+                    'n': 2,
+                },
+                'agent_class': 'CounterModel',
+                'states': {
+                    0: {'times': 10},
+                    1: {'times': 20},
+                },
            }
        }
-        s = simulation.from_old_config(config)
+        s = simulation.from_config(config)
    def test_counter_agent(self):
        """
@@ -110,24 +68,24 @@ class TestMain(TestCase):
        agent should be able to update its state."""
        config = {
            'version': '2',
-            'general': {
-                'name': 'CounterAgent',
-                'max_time': 2,
-                'dry_run': True,
-                'num_trials': 1,
-            },
-            'topologies': {
-                'default': {
-                    'path': join(ROOT, 'test.gexf')
-                }
-            },
-            'agents': {
-                'default': {
-                    'agent_class': 'CounterModel',
-                },
-                'counters': {
-                    'topology': 'default',
-                    'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
-                }
-            }
+            'name': 'CounterAgent',
+            'dry_run': True,
+            'num_trials': 1,
+            'max_time': 2,
+            'model_params': {
+                'topologies': {
+                    'default': {
+                        'path': join(ROOT, 'test.gexf')
+                    }
+                },
+                'agents': {
+                    'default': {
+                        'agent_class': 'CounterModel',
+                    },
+                    'counters': {
+                        'topology': 'default',
+                        'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
+                    }
+                }
+            }
        }
@@ -141,33 +99,37 @@ class TestMain(TestCase):
        assert env.agents[0]['times'] == 11
        assert env.agents[1]['times'] == 21
-    def test_custom_agent(self):
-        """Allow for search of neighbors with a certain state_id"""
+    def test_init_and_count_agents(self):
+        """Agents should be properly initialized and counting should filter them properly"""
+        #TODO: separate this test into two or more test cases
        config = {
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            },
-            'network_agents': [{
-                'agent_type': CustomAgent,
-                'weight': 1
-            }],
            'max_time': 10,
-            'environment_params': {
-            }
+            'model_params': {
+                'agents': [(CustomAgent, {'weight': 1}),
+                           (CustomAgent, {'weight': 3}),
+                ],
+                'topologies': {
+                    'default': {
+                        'path': join(ROOT, 'test.gexf')
+                    }
+                },
+            },
        }
-        s = simulation.from_old_config(config)
+        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
-        assert env.agents[1].count_agents(state_id='normal') == 2
-        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
-        assert env.agents[0].neighbors == 1
+        assert env.agents[0].weight == 1
+        assert env.count_agents() == 2
+        assert env.count_agents(weight=1) == 1
+        assert env.count_agents(weight=3) == 1
+        assert env.count_agents(agent_class=CustomAgent) == 2
    def test_torvalds_example(self):
        """A complete example from a documentation should work."""
        config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
-        config['network_params']['path'] = join(EXAMPLES,
-                                                config['network_params']['path'])
-        s = simulation.from_old_config(config)
+        config['model_params']['network_params']['path'] = join(EXAMPLES,
+                                                                config['model_params']['network_params']['path'])
+        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
        for a in env.network_agents:
            skill_level = a.state['skill_level']
@@ -184,47 +146,6 @@ class TestMain(TestCase):
            assert a.state['total'] == 3
            assert a.state['neighbors'] == 1
def test_yaml(self):
"""
The YAML version of a newly created configuration should be equivalent
to the configuration file used.
Values not present in the original config file should have reasonable
defaults.
"""
with utils.timer('loading'):
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_old_config(config)
with utils.timer('serializing'):
serial = s.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
for (k, v) in config.items():
assert recovered[k] == v
def test_configuration_changes(self):
"""
The configuration should not change after running
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_old_config(config)
init_config = copy.copy(s.config)
s.run_simulation(dry_run=True)
nconfig = s.config
# del nconfig['to
assert init_config == nconfig
def test_save_geometric(self):
"""
There is a bug in networkx that prevents it from creating a GEXF file
from geometric models. We should work around it.
"""
G = nx.random_geometric_graph(20, 0.1)
env = Environment(topology=G)
f = io.BytesIO()
env.dump_gexf(f)
    def test_serialize_class(self):
        ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
        assert name == 'soil.agents.BaseAgent'
@@ -247,7 +168,7 @@ class TestMain(TestCase):
        des = serialization.deserialize(name, ser)
        assert i == des

-    def test_serialize_agent_type(self):
+    def test_serialize_agent_class(self):
        '''A class from soil.agents should be serialized without the module part'''
        ser = agents.serialize_type(CustomAgent)
        assert ser == 'test_main.CustomAgent'
@@ -258,33 +179,33 @@ class TestMain(TestCase):
    def test_deserialize_agent_distribution(self):
        agent_distro = [
            {
-                'agent_type': 'CounterModel',
+                'agent_class': 'CounterModel',
                'weight': 1
            },
            {
-                'agent_type': 'test_main.CustomAgent',
+                'agent_class': 'test_main.CustomAgent',
                'weight': 2
            },
        ]
        converted = agents.deserialize_definition(agent_distro)
-        assert converted[0]['agent_type'] == agents.CounterModel
-        assert converted[1]['agent_type'] == CustomAgent
+        assert converted[0]['agent_class'] == agents.CounterModel
+        assert converted[1]['agent_class'] == CustomAgent
        pickle.dumps(converted)

    def test_serialize_agent_distribution(self):
        agent_distro = [
            {
-                'agent_type': agents.CounterModel,
+                'agent_class': agents.CounterModel,
                'weight': 1
            },
            {
-                'agent_type': CustomAgent,
+                'agent_class': CustomAgent,
                'weight': 2
            },
        ]
        converted = agents.serialize_definition(agent_distro)
-        assert converted[0]['agent_type'] == 'CounterModel'
-        assert converted[1]['agent_type'] == 'test_main.CustomAgent'
+        assert converted[0]['agent_class'] == 'CounterModel'
+        assert converted[1]['agent_class'] == 'test_main.CustomAgent'
        pickle.dumps(converted)

    def test_subgraph(self):
@@ -292,7 +213,7 @@ class TestMain(TestCase):
        G = nx.Graph()
        G.add_node(3)
        G.add_edge(1, 2)
-        distro = agents.calculate_distribution(agent_type=agents.NetworkAgent)
+        distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
        distro[0]['topology'] = 'default'
        aconfig = config.AgentConfig(distribution=distro, topology='default')
        env = Environment(name='Test', topologies={'default': G}, agents={'network': aconfig})
@@ -303,7 +224,7 @@ class TestMain(TestCase):
        assert len(a2.subgraph(limit_neighbors=True)) == 2
        assert len(a3.subgraph(limit_neighbors=True)) == 1
        assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
-        assert len(a3.subgraph(agent_type=agents.NetworkAgent)) == 3
+        assert len(a3.subgraph(agent_class=agents.NetworkAgent)) == 3
def test_templates(self): def test_templates(self):
'''Loading a template should result in several configs''' '''Loading a template should result in several configs'''
@@ -313,19 +234,19 @@ class TestMain(TestCase):
     def test_until(self):
         config = {
             'name': 'until_sim',
-            'network_params': {},
-            'agent_type': 'CounterModel',
+            'model_params': {
+                'network_params': {},
+                'agent_class': 'CounterModel',
+            },
             'max_time': 2,
             'num_trials': 50,
-            'environment_params': {}
         }
-        s = simulation.from_old_config(config)
+        s = simulation.from_config(config)
         runs = list(s.run_simulation(dry_run=True))
         over = list(x.now for x in runs if x.now>2)
         assert len(runs) == config['num_trials']
         assert len(over) == 0

     def test_fsm(self):
         '''Basic state change'''
         class ToggleAgent(agents.FSM):

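The `test_until` hunk above captures the shape of the configuration change in this commit: `network_params` and the renamed `agent_class` field move under a new `model_params` key, and the empty `environment_params` entry disappears. A minimal, self-contained sketch of that migration (`migrate_config` is a hypothetical helper for illustration, not part of soil):

```python
def migrate_config(old):
    """Rewrite an old-style flat config into the new layout
    implied by the diff above (hypothetical helper)."""
    # Keep everything except the keys that move or are dropped.
    new = {k: v for k, v in old.items()
           if k not in ('network_params', 'agent_type', 'environment_params')}
    # Network settings and the agent class now live under 'model_params',
    # and 'agent_type' is renamed to 'agent_class'.
    new['model_params'] = {
        'network_params': old.get('network_params', {}),
        'agent_class': old['agent_type'],
    }
    return new

old = {
    'name': 'until_sim',
    'network_params': {},
    'agent_type': 'CounterModel',
    'max_time': 2,
    'num_trials': 50,
    'environment_params': {},
}
migrated = migrate_config(old)
assert migrated['model_params']['agent_class'] == 'CounterModel'
assert 'environment_params' not in migrated
```

The resulting dict matches the new-style config constructed in `test_until`.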
@@ -0,0 +1,85 @@
+from unittest import TestCase
+import io
+import os
+import networkx as nx
+
+from os.path import join
+
+from soil import network, environment, simulation
+from test_main import CustomAgent
+
+ROOT = os.path.abspath(os.path.dirname(__file__))
+EXAMPLES = join(ROOT, '..', 'examples')
+
+
+class TestNetwork(TestCase):
+    def test_load_graph(self):
+        """
+        Load a graph from file if the extension is known.
+        Raise an exception otherwise.
+        """
+        config = {
+            'network_params': {
+                'path': join(ROOT, 'test.gexf')
+            }
+        }
+        G = network.from_config(config['network_params'])
+        assert G
+        assert len(G) == 2
+
+        with self.assertRaises(AttributeError):
+            config = {
+                'network_params': {
+                    'path': join(ROOT, 'unknown.extension')
+                }
+            }
+            G = network.from_config(config['network_params'])
+            print(G)
+
+    def test_generate_barabasi(self):
+        """
+        If no path is given, a generator and network parameters
+        should be used to generate a network
+        """
+        cfg = {
+            'params': {
+                'generator': 'barabasi_albert_graph'
+            }
+        }
+        with self.assertRaises(Exception):
+            G = network.from_config(cfg)
+        cfg['params']['n'] = 100
+        cfg['params']['m'] = 10
+        G = network.from_config(cfg)
+        assert len(G) == 100
+
+    def test_save_geometric(self):
+        """
+        There is a bug in networkx that prevents it from creating a GEXF file
+        from geometric models. We should work around it.
+        """
+        G = nx.random_geometric_graph(20, 0.1)
+        env = environment.NetworkEnvironment(topology=G)
+        f = io.BytesIO()
+        env.dump_gexf(f)
+
+    def test_custom_agent_neighbors(self):
+        """Allow for search of neighbors with a certain state_id"""
+        config = {
+            'network_params': {
+                'path': join(ROOT, 'test.gexf')
+            },
+            'network_agents': [{
+                'agent_class': CustomAgent,
+                'weight': 1
+            }],
+            'max_time': 10,
+            'environment_params': {
+            }
+        }
+        s = simulation.from_config(config)
+        env = s.run_simulation(dry_run=True)[0]
+        assert env.agents[1].count_agents(state_id='normal') == 2
+        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
+        assert env.agents[0].neighbors == 1
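The `test_save_geometric` docstring in the new file refers to a known networkx limitation: `random_geometric_graph` stores each node's position as a list of coordinates, and the GEXF writer cannot serialize list-valued attributes. A sketch of the kind of workaround `env.dump_gexf` is presumably applying (flattening `pos` to a string before writing; this is an assumption for illustration, not soil's actual code):

```python
import io
import networkx as nx

G = nx.random_geometric_graph(20, 0.1)

# Each node carries a 'pos' attribute that is a list of floats; the GEXF
# writer rejects list-valued attributes, so flatten it to a plain string
# first (assumed workaround).
for _, data in G.nodes(data=True):
    data['pos'] = ','.join(str(c) for c in data['pos'])

buf = io.BytesIO()
nx.write_gexf(G, buf)  # now succeeds, since all attributes are strings
assert b'gexf' in buf.getvalue()
```

Dropping the attribute entirely (`del data['pos']`) would work too, at the cost of losing the layout information in the exported file.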