Pre-release version of v1.0

mesa
J. Fernando Sánchez 1 year ago
parent be65592055
commit cc238d84ec

@ -3,19 +3,31 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0 UNRELEASED]
Version 1.0 introduces multiple changes, especially to the `Simulation` class and anything related to how configuration is handled.
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
### Added
* A modular set of classes for environments/models. The ability to configure the agents through an agent definition and a topology through a network configuration is now split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* Environments now have a class method (`Environment.run`) that makes them easier to use without a `Simulation`. Note that this is different from `run_model`, which is an instance method.
* Ability to run simulations using mesa models
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* Agents can now have generators as a step function or a state (see the sketch after this list). They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or `None`) can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a cartesian product (i.e., all possible combinations) of each parameter. This is illustrated in the `Simulation` sketch after the Changed section below.
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
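As an illustration of the generator-based states described above, here is a minimal sketch; the `Walker` agent and its states are hypothetical, and only `FSM`, `state`, and `default_state` come from `soil.agents`:

```python
from soil.agents import FSM, state, default_state


class Walker(FSM):
    """Hypothetical agent showing a generator used as a state."""

    @default_state
    @state
    def walking(self):
        # A bare `yield` hands control back to the scheduler; execution
        # resumes right here on the agent's next activation instead of
        # re-entering the state from the top.
        yield
        yield
        # The return value may be a state (or a `(state, time)` tuple).
        return self.resting

    @state
    def resting(self):
        return self.walking
```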
### Changed
* Configuration schema (`Simulation`) has been greatly simplified. All existing simulation configurations should be checked against the new schema
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (the specific parameters for this environment). We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa. See the sketch after this list for an example
* `num_trials` renamed to `iterations`
* General renaming of `trial` to `iteration`, to work better with `mesa`
* `model_parameters` renamed to `parameters` in simulation
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
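The sketch below illustrates the renamed `Simulation` arguments, the `matrix` parameter sweep, and the new `Exporter.iteration_end` hook. The exporter class, the model choice, and the parameter names are placeholders chosen for illustration:

```python
from soil import Simulation
from soil.exporters import Exporter


class ParamsLogger(Exporter):
    """Hypothetical exporter relying on the new iteration hooks."""

    def iteration_end(self, env, params, params_id, *args, **kwargs):
        # `params` holds the parameter combination used for this iteration;
        # `params_id` is a short identifier derived from it.
        print(f"iteration {env.id} done (params {params_id}): {params}")


sim = Simulation(
    name="changelog_demo",
    model="soil.Environment",          # default environment, for illustration
    parameters=dict(base_rate=0.1),    # formerly `model_params`
    matrix=dict(n_agents=[10, 20]),    # run every combination of these values
    iterations=2,                      # formerly `num_trials`
    max_steps=10,
    exporters=[ParamsLogger],
    dump=False,
)

if __name__ == "__main__":
    sim.run()
```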
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)

@ -7,7 +7,7 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
> **Warning**
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst)
## Features

@ -1,7 +1,20 @@
Welcome to Soil's documentation!
================================
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
.. image:: soil.png
:width: 80%
:align: center
Soil can be installed through pip (see more details in the :doc:`installation` page):
.. code:: bash
pip install soil
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`_.
If you use Soil in your research, do not forget to cite this paper:
@ -33,8 +46,6 @@ If you use Soil in your research, do not forget to cite this paper:
:caption: Learn more about soil:
installation
quickstart
configuration
Tutorial <soil_tutorial>
..

@ -1,7 +1,10 @@
Installation
------------
Through pip
===========
The easiest way to install Soil is through pip, with Python >= 3.8:
.. code:: bash
@ -25,4 +28,38 @@ Or, if you're using soil programmatically:
import soil
print(soil.__version__)
The latest version can be installed through `GitHub <https://github.com/gsi-upm/soil>`_ or `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_.
Web UI
======
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web
Development
===========
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
.. code:: bash
git clone https://github.com/gsi-upm/soil
cd soil
python -m venv .venv
source .venv/bin/activate
pip install --editable .

@ -1,7 +1,7 @@
What are the main changes in version 1.0?
#########################################
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.
We drew several lessons from the previous version of Soil, and tried to address them in this version.

Several binary image files were added, replaced, or removed in this commit; their contents are not shown.

@ -1,93 +0,0 @@
Quickstart
----------
This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.
There are mainly two parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
.. image:: soil.png
:width: 80%
:align: center
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.
Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
.. literalinclude:: quickstart.yml
:language: yaml
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities to change from one state to another, either spontaneously or because of contagion from neighboring agents.
Running the simulation
======================
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
.. code::
soil --graph --csv quickstart.yml [13:35:29]
INFO:soil:Using config(s): quickstart
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
INFO:soil:Starting simulation quickstart at 13:35:30.
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
INFO:soil:Dumping results to soil_output/quickstart
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
The ``CSV`` file should look like this:
.. code::
agent_id,t_step,key,value
env,0,neutral_discontent_spon_prob,0.05
env,0,neutral_discontent_infected_prob,0.1
env,0,neutral_content_spon_prob,0.2
env,0,neutral_content_infected_prob,0.4
env,0,discontent_neutral,0.2
env,0,discontent_content,0.05
env,0,content_discontent,0.05
env,0,variance_d_c,0.05
env,0,variance_c_d,0.1
Results and visualization
=========================
The environment variables are stored with ``env`` as their ``agent_id``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
The dynamic graph is exported as a .gexf file which could be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web

@ -1,33 +0,0 @@
---
name: quickstart
num_trials: 1
max_time: 1000
model_params:
agents:
- agent_class: SISaModel
topology: true
state:
id: neutral
weight: 1
- agent_class: SISaModel
topology: true
state:
id: content
weight: 2
topology:
params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

File diff suppressed because it is too large

@ -94,9 +94,9 @@ class Driver(Evented, FSM):
def check_passengers(self): def check_passengers(self):
"""If there are no more passengers, stop forever""" """If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger) c = self.count_agents(agent_class=Passenger)
self.info(f"Passengers left {c}") self.debug(f"Passengers left {c}")
if not c: if not c:
self.die() self.die("No more passengers")
@default_state @default_state
@state @state
@ -129,10 +129,13 @@ class Driver(Evented, FSM):
@state @state
def driving(self): def driving(self):
"""The journey has been accepted. Pick them up and take them to their destination""" """The journey has been accepted. Pick them up and take them to their destination"""
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
while self.move_towards(self.journey.origin): while self.move_towards(self.journey.origin):
yield yield
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
while self.move_towards(self.journey.destination, with_passenger=True): while self.move_towards(self.journey.destination, with_passenger=True):
yield yield
self.info("Arrived at destination")
self.earnings += self.journey.tip self.earnings += self.journey.tip
self.model.total_earnings += self.journey.tip self.model.total_earnings += self.journey.tip
self.check_passengers() self.check_passengers()
@ -140,7 +143,7 @@ class Driver(Evented, FSM):
def move_towards(self, target, with_passenger=False): def move_towards(self, target, with_passenger=False):
"""Move one cell at a time towards a target""" """Move one cell at a time towards a target"""
self.info(f"Moving { self.pos } -> { target }") self.debug(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]: if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False return False
@ -174,8 +177,8 @@ class Passenger(Evented, FSM):
@state @state
def asking(self): def asking(self):
destination = ( destination = (
self.random.randint(0, self.model.grid.height), self.random.randint(0, self.model.grid.height-1),
self.random.randint(0, self.model.grid.width), self.random.randint(0, self.model.grid.width-1),
) )
self.journey = None self.journey = None
journey = Journey( journey = Journey(
@ -187,19 +190,21 @@ class Passenger(Evented, FSM):
timeout = 60 timeout = 60
expiration = self.now + timeout expiration = self.now + timeout
self.info(f"Asking for journey at: { self.pos }")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver) self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey: while not self.journey:
self.info(f"Passenger at: { self.pos }. Checking for responses.") self.debug(f"Waiting for responses at: { self.pos }")
try: try:
# This will call check_messages behind the scenes, and the agent's status will be updated # This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False # If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration) yield self.received(expiration=expiration)
except events.TimedOut: except events.TimedOut:
self.info(f"Passenger at: { self.pos }. Asking for journey.") self.info(f"Still no response. Waiting at: { self.pos }")
self.model.broadcast( self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver journey, ttl=timeout, sender=self, agent_class=Driver
) )
expiration = self.now + timeout expiration = self.now + timeout
self.info(f"Got a response! Waiting for driver")
return self.driving_home return self.driving_home
@state @state
@ -220,7 +225,7 @@ simulation = Simulation(name="RideHailing",
model=City, model=City,
seed="carsSeed", seed="carsSeed",
max_time=1000, max_time=1000,
model_params=dict(n_passengers=2)) parameters=dict(n_passengers=2))
if __name__ == "__main__": if __name__ == "__main__":
easy(simulation) easy(simulation)

@ -1,7 +1,7 @@
from soil import Simulation from soil import Simulation
from social_wealth import MoneyEnv, graph_generator from social_wealth import MoneyEnv, graph_generator
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, model_params=dict(generator=graph_generator, N=10, width=50, height=50)) sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
if __name__ == "__main__": if __name__ == "__main__":
sim.run() sim.run()

@ -63,7 +63,7 @@ chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector" [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
) )
model_params = { parameters = {
"N": Slider( "N": Slider(
"N", "N",
5, 5,
@ -98,12 +98,12 @@ model_params = {
canvas_element = CanvasGrid( canvas_element = CanvasGrid(
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500 gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
) )
server = ModularServer( server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
) )
server.port = 8521 server.port = 8521

@ -116,7 +116,7 @@ for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
Simulation( Simulation(
name='newspread_sim', name='newspread_sim',
model=NewsSpread, model=NewsSpread,
model_params=dict( parameters=dict(
ratio_dumb=r1, ratio_dumb=r1,
ratio_herd=r2, ratio_herd=r2,
ratio_wise=1-r1-r2, ratio_wise=1-r1-r2,
@ -124,7 +124,7 @@ for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
network_params=netparams, network_params=netparams,
prob_neighbor_spread=0, prob_neighbor_spread=0,
), ),
num_trials=5, iterations=5,
max_steps=300, max_steps=300,
dump=False, dump=False,
).run() ).run()

@ -38,7 +38,7 @@ simulation = Simulation(
name="Programmatic", name="Programmatic",
model=ProgrammaticEnv, model=ProgrammaticEnv,
seed='Program', seed='Program',
num_trials=1, iterations=1,
max_time=100, max_time=100,
dump=False, dump=False,
) )

@ -178,10 +178,10 @@ class Police(FSM):
sim = Simulation( sim = Simulation(
model=CityPubs, model=CityPubs,
name="pubcrawl", name="pubcrawl",
num_trials=3, iterations=3,
max_steps=10, max_steps=10,
dump=False, dump=False,
model_params=dict( parameters=dict(
network_generator=nx.empty_graph, network_generator=nx.empty_graph,
network_params={"n": 30}, network_params={"n": 30},
model=CityPubs, model=CityPubs,

@ -147,7 +147,7 @@ class RandomAccident(BaseAgent):
self.debug("Rabbits alive: {}".format(rabbits_alive)) self.debug("Rabbits alive: {}".format(rabbits_alive))
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", num_trials=1) sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__": if __name__ == "__main__":
sim.run() sim.run()

@ -155,7 +155,7 @@ class RandomAccident(BaseAgent):
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", num_trials=1) sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__": if __name__ == "__main__":
sim.run() sim.run()

@ -38,7 +38,7 @@ class RandomEnv(Environment):
s = Simulation( s = Simulation(
name="Programmatic", name="Programmatic",
model=RandomEnv, model=RandomEnv,
num_trials=1, iterations=1,
max_time=100, max_time=100,
dump=False, dump=False,
) )

@ -2,6 +2,7 @@ import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
from soil import Environment, Simulation from soil import Environment, Simulation
from soil.parameters import * from soil.parameters import *
from soil.utils import int_seed
class TerroristEnvironment(Environment): class TerroristEnvironment(Environment):
@ -38,9 +39,8 @@ class TerroristEnvironment(Environment):
HavenModel HavenModel
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven]) ], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
@staticmethod def generator(self, *args, **kwargs):
def generator(*args, **kwargs): return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
return nx.random_geometric_graph(*args, **kwargs)
class TerroristSpreadModel(FSM, Geo): class TerroristSpreadModel(FSM, Geo):
""" """
@ -137,7 +137,7 @@ class TerroristSpreadModel(FSM, Geo):
def ego_search(self, steps=1, center=False, agent=None, **kwargs): def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*""" """Get a list of nodes in the ego network of *node* of radius *steps*"""
node = agent.node_id node = agent.node_id if agent else self.node_id
G = self.subgraph(**kwargs) G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes() return nx.ego_graph(G, node, center=center, radius=steps).nodes()
@ -279,26 +279,26 @@ class TerroristNetworkModel(TerroristSpreadModel):
) )
) )
neighbours = set( neighbours = set(
agent.id agent.unique_id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel) for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
) )
search = (close_ups | step_neighbours) - neighbours search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search): for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id) social_distance = 1 / self.shortest_path_length(agent.unique_id)
spatial_proximity = 1 - self.get_distance(agent.id) spatial_proximity = 1 - self.get_distance(agent.unique_id)
prob_new_interaction = ( prob_new_interaction = (
self.weight_social_distance * social_distance self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity + self.weight_link_distance * spatial_proximity
) )
if ( if (
agent["id"] == agent.civilian.id agent.state_id == "civilian"
and self.random.random() < prob_new_interaction and self.random.random() < prob_new_interaction
): ):
self.add_edge(agent) self.add_edge(agent)
break break
def get_distance(self, target): def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id] source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target] target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x) dx = abs(source_x - target_x)
dy = abs(source_y - target_y) dy = abs(source_y - target_y)
@ -306,16 +306,17 @@ class TerroristNetworkModel(TerroristSpreadModel):
def shortest_path_length(self, target): def shortest_path_length(self, target):
try: try:
return nx.shortest_path_length(self.G, self.id, target) return nx.shortest_path_length(self.G, self.unique_id, target)
except nx.NetworkXNoPath: except nx.NetworkXNoPath:
return float("inf") return float("inf")
sim = Simulation( sim = Simulation(
model=TerroristEnvironment, model=TerroristEnvironment,
num_trials=1, iterations=1,
name="TerroristNetworkModel_sim", name="TerroristNetworkModel_sim",
max_steps=150, max_steps=150,
seed="default2",
skip_test=False, skip_test=False,
dump=False, dump=False,
) )

File diff suppressed because one or more lines are too long

@ -9,4 +9,5 @@ Mesa>=1.2
pydantic>=1.9 pydantic>=1.9
sqlalchemy>=1.4 sqlalchemy>=1.4
typing-extensions>=4.4 typing-extensions>=4.4
annotated-types>=0.4 annotated-types>=0.4
tqdm>=4.64

@ -1 +1 @@
0.30.0rc4 0.1.0rc1

@ -16,6 +16,7 @@ except NameError:
basestring = str basestring = str
from pathlib import Path from pathlib import Path
from .analysis import *
from .agents import * from .agents import *
from . import agents from . import agents
from .simulation import * from .simulation import *
@ -87,7 +88,7 @@ def main(
"--graph", "--graph",
"-g", "-g",
action="store_true", action="store_true",
help="Dump each trial's network topology as a GEXF graph. Defaults to false.", help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
) )
parser.add_argument( parser.add_argument(
"--csv", "--csv",
@ -116,11 +117,23 @@ def main(
help="Export environment and/or simulations using this exporter", help="Export environment and/or simulations using this exporter",
) )
parser.add_argument( parser.add_argument(
"--until", "--max_time",
default="", default="-1",
help="Set maximum time for the simulation to run. ", help="Set maximum time for the simulation to run. ",
) )
parser.add_argument(
"--max_steps",
default="-1",
help="Set maximum number of steps for the simulation to run.",
)
parser.add_argument(
"--iterations",
default="",
help="Set maximum number of iterations (runs) for the simulation.",
)
parser.add_argument( parser.add_argument(
"--seed", "--seed",
default=None, default=None,
@ -147,7 +160,8 @@ def main(
) )
args = parser.parse_args() args = parser.parse_args()
logger.setLevel(getattr(logging, (args.level or "INFO").upper())) level = getattr(logging, (args.level or "INFO").upper())
logger.setLevel(level)
if args.version: if args.version:
return return
@ -185,11 +199,14 @@ def main(
debug=debug, debug=debug,
exporters=exporters, exporters=exporters,
num_processes=args.num_processes, num_processes=args.num_processes,
level=level,
outdir=output, outdir=output,
exporter_params=exp_params, exporter_params=exp_params,
**kwargs) **kwargs)
if args.seed is not None: if args.seed is not None:
opts["seed"] = args.seed opts["seed"] = args.seed
if args.iterations:
opts["iterations"] =int(args.iterations)
if sim: if sim:
logger.info("Loading simulation instance") logger.info("Loading simulation instance")
@ -218,7 +235,7 @@ def main(
k, v = s.split("=", 1)[:2] k, v = s.split("=", 1)[:2]
v = eval(v) v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1] tail, *head = k.rsplit(".", 1)[::-1]
target = sim.model_params target = sim.parameters
if head: if head:
for part in head[0].split("."): for part in head[0].split("."):
try: try:
@ -233,7 +250,9 @@ def main(
if args.only_convert: if args.only_convert:
print(sim.to_yaml()) print(sim.to_yaml())
continue continue
res.append(sim.run(until=args.until)) max_time = float(args.max_time) if args.max_time != "-1" else None
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
res.append(sim.run(max_time=max_time, max_steps=max_steps))
except Exception as ex: except Exception as ex:
if args.pdb: if args.pdb:

@ -6,9 +6,9 @@ from . import NetworkAgent
class Geo(NetworkAgent): class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute.""" """In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, agent=None, center=False, **kwargs): def geo_search(self, radius, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*.""" """Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = agent.node node = self.node_id
G = self.subgraph(**kwargs) G = self.subgraph(**kwargs)

@ -220,7 +220,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
def _check_alive(self): def _check_alive(self):
if not self.alive: if not self.alive:
raise time.DeadAgent(self.unique_id) raise time.DeadAgent(self.unique_id)
def log(self, *message, level=logging.INFO, **kwargs): def log(self, *message, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level): if not self.logger.isEnabledFor(level):
return return
@ -669,4 +669,4 @@ except ImportError:
def custom(cls, **kwargs): def custom(cls, **kwargs):
"""Create a new class from a template class and keyword arguments""" """Create a new class from a template class and keyword arguments"""
return type(cls.__name__, (cls,), kwargs) return type(cls.__name__, (cls,), kwargs)

@ -5,7 +5,7 @@ from functools import partial, wraps
import inspect import inspect
def state(name=None): def state(name=None, default=False):
def decorator(func, name=None): def decorator(func, name=None):
""" """
A state function should return either a state id, or a tuple (state_id, when) A state function should return either a state id, or a tuple (state_id, when)
@ -40,7 +40,7 @@ def state(name=None):
self._last_except = None self._last_except = None
func.id = name or func.__name__ func.id = name or func.__name__
func.is_default = False func.is_default = default
return func return func
if callable(name): if callable(name):
@ -101,6 +101,10 @@ class FSM(BaseAgent, metaclass=MetaFSM):
if init: if init:
self.init() self.init()
@classmethod
def states(cls):
return list(cls._states.keys())
def step(self): def step(self):
self.debug(f"Agent {self.unique_id} @ state {self.state_id}") self.debug(f"Agent {self.unique_id} @ state {self.state_id}")

@ -40,14 +40,11 @@ class NetworkAgent(BaseAgent):
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs): def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None unique_ids = None
if isinstance(unique_id, list): if unique_ids is not None:
unique_ids = set(unique_id) try:
elif unique_id is not None: unique_ids = set(unique_id)
unique_ids = set( except TypeError:
[ unique_ids = set([unique_id])
unique_id,
]
)
if limit_neighbors: if limit_neighbors:
neighbor_ids = set() neighbor_ids = set()

@ -0,0 +1,49 @@
import os
import sqlalchemy
import pandas as pd
from collections import namedtuple
def plot(env, agent_df=None, model_df=None, steps=False, ignore=["agent_count", ]):
"""Plot the model dataframe and agent dataframe together."""
if agent_df is None:
agent_df = env.agent_df()
if model_df is None:
model_df = env.model_df()
ignore = list(ignore)
if not steps:
ignore.append("step")
else:
ignore.append("time")
ax = model_df.drop(ignore, axis='columns').plot();
if not agent_df.empty:
agent_df.unstack().apply(lambda x: x.value_counts(),
axis=1).fillna(0).plot(ax=ax, secondary_y=True);
Results = namedtuple("Results", ["config", "parameters", "env", "agents"])
#TODO implement reading from CSV and SQLITE
def read_sql(fpath=None, name=None, include_agents=False):
if not (fpath is None) ^ (name is None):
raise ValueError("Specify either a path or a simulation name")
if name:
fpath = os.path.join("soil_output", name, f"{name}.sqlite")
fpath = os.path.abspath(fpath)
# TODO: improve url parsing. This is a hacky way to check we weren't given a URL
if "://" not in fpath:
fpath = f"sqlite:///{fpath}"
engine = sqlalchemy.create_engine(fpath)
with engine.connect() as conn:
env = pd.read_sql_table("env", con=conn,
index_col="step").reset_index().set_index([
"simulation_id", "params_id",
"iteration_id", "step"
])
agents = pd.read_sql_table("agents", con=conn, index_col=["simulation_id", "params_id", "iteration_id", "step", "agent_id"])
config = pd.read_sql_table("configuration", con=conn, index_col="simulation_id")
parameters = pd.read_sql_table("parameters", con=conn, index_col=["iteration_id", "params_id", "simulation_id"])
try:
parameters = parameters.pivot(columns="key", values="value")
except Exception as e:
print(f"warning: coult not pivot parameters: {e}")
return Results(config, parameters, env, agents)

@ -8,8 +8,10 @@ class SoilCollector(MDC):
tables = tables or {} tables = tables or {}
if 'agent_count' not in model_reporters: if 'agent_count' not in model_reporters:
model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count() model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
if 'state_id' not in agent_reporters: if 'time' not in model_reporters:
agent_reporters['agent_id'] = lambda agent: getattr(agent, 'state_id', None) model_reporters['time'] = lambda m: m.now
# if 'state_id' not in agent_reporters:
# agent_reporters['state_id'] = lambda agent: getattr(agent, 'state_id', None)
super().__init__(model_reporters=model_reporters, super().__init__(model_reporters=model_reporters,
agent_reporters=agent_reporters, agent_reporters=agent_reporters,

@ -34,11 +34,13 @@ class BaseEnvironment(Model):
:meth:`soil.environment.Environment.get` method. :meth:`soil.environment.Environment.get` method.
""" """
collector_class = datacollection.SoilCollector
def __new__(cls, def __new__(cls,
*args: Any, *args: Any,
seed="default", seed="default",
dir_path=None, dir_path=None,
collector_class: type = datacollection.SoilCollector, collector_class: type = None,
agent_reporters: Optional[Any] = None, agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None, model_reporters: Optional[Any] = None,
tables: Optional[Any] = None, tables: Optional[Any] = None,
@ -46,6 +48,7 @@ class BaseEnvironment(Model):
"""Create a new model with a default seed value""" """Create a new model with a default seed value"""
self = super().__new__(cls, *args, seed=seed, **kwargs) self = super().__new__(cls, *args, seed=seed, **kwargs)
self.dir_path = dir_path or os.getcwd() self.dir_path = dir_path or os.getcwd()
collector_class = collector_class or cls.collector_class
collector_class = serialization.deserialize(collector_class) collector_class = serialization.deserialize(collector_class)
self.datacollector = collector_class( self.datacollector = collector_class(
model_reporters=model_reporters, model_reporters=model_reporters,
@ -69,6 +72,7 @@ class BaseEnvironment(Model):
dir_path=None, dir_path=None,
schedule_class=time.TimedActivation, schedule_class=time.TimedActivation,
interval=1, interval=1,
logger = None,
agents: Optional[Dict] = None, agents: Optional[Dict] = None,
collector_class: type = datacollection.SoilCollector, collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None, agent_reporters: Optional[Any] = None,
@ -80,10 +84,15 @@ class BaseEnvironment(Model):
super().__init__() super().__init__()
self.current_id = -1 self.current_id = -1
self.id = id self.id = id
if logger:
self.logger = logger
else:
self.logger = utils.logger.getChild(self.id)
if schedule_class is None: if schedule_class is None:
schedule_class = time.TimedActivation schedule_class = time.TimedActivation
@ -93,8 +102,6 @@ class BaseEnvironment(Model):
self.interval = interval self.interval = interval
self.schedule = schedule_class(self) self.schedule = schedule_class(self)
self.logger = utils.logger.getChild(self.id)
for (k, v) in env_params.items(): for (k, v) in env_params.items():
self[k] = v self[k] = v
@ -102,6 +109,7 @@ class BaseEnvironment(Model):
self.add_agents(**agents) self.add_agents(**agents)
if init: if init:
self.init() self.init()
self.datacollector.collect(self)
def init(self): def init(self):
pass pass
@ -115,6 +123,22 @@ class BaseEnvironment(Model):
def count_agents(self, *args, **kwargs): def count_agents(self, *args, **kwargs):
return sum(1 for i in self.agents(*args, **kwargs)) return sum(1 for i in self.agents(*args, **kwargs))
def agent_df(self, steps=False):
df = self.datacollector.get_agent_vars_dataframe()
if steps:
df.index.rename(["step", "agent_id"], inplace=True)
return df
model_df = self.datacollector.get_model_vars_dataframe()
df.index = df.index.set_levels(model_df.time, level=0).rename(["time", "agent_id"])
return df
def model_df(self, steps=False):
df = self.datacollector.get_model_vars_dataframe()
if steps:
return df
df.index.rename("step", inplace=True)
return df.reset_index().set_index("time")
@property @property
def now(self): def now(self):
@ -171,11 +195,12 @@ class BaseEnvironment(Model):
self.schedule.step() self.schedule.step()
self.datacollector.collect(self) self.datacollector.collect(self)
msg = "Model data:\n" if self.logger.isEnabledFor(logging.DEBUG):
max_width = max(len(k) for k in self.datacollector.model_vars.keys()) msg = "Model data:\n"
for (k, v) in self.datacollector.model_vars.items(): max_width = max(len(k) for k in self.datacollector.model_vars.keys())
msg += f"\t{k:<{max_width}}: {v[-1]:>6}\n" for (k, v) in self.datacollector.model_vars.items():
self.logger.info(f"--- Steps: {self.schedule.steps:^5} - Time: {self.now:^5} --- " + msg) msg += f"\t{k:<{max_width}}: {v[-1]:>6}\n"
self.logger.debug(f"--- Steps: {self.schedule.steps:^5} - Time: {self.now:^5} --- " + msg)
def add_model_reporter(self, name, func=None): def add_model_reporter(self, name, func=None):
if not func: if not func:
@ -186,9 +211,18 @@ class BaseEnvironment(Model):
if agent_type: if agent_type:
reporter = lambda a: getattr(a, name) if isinstance(a, agent_type) else None reporter = lambda a: getattr(a, name) if isinstance(a, agent_type) else None
else: else:
reporter = name reporter = lambda a: getattr(a, name, None)
self.datacollector._new_agent_reporter(name, reporter) self.datacollector._new_agent_reporter(name, reporter)
@classmethod
def run(cls, *,
iterations=1,
num_processes=1, **kwargs):
from .simulation import Simulation
return Simulation(name=cls.__name__,
model=cls, iterations=iterations,
num_processes=num_processes, **kwargs).run()
def __getitem__(self, key): def __getitem__(self, key):
try: try:
return getattr(self, key) return getattr(self, key)
@ -250,6 +284,7 @@ class NetworkEnvironment(BaseEnvironment):
self._check_agent_nodes() self._check_agent_nodes()
if init: if init:
self.init() self.init()
self.datacollector.collect(self)
def add_agent(self, agent_class, *args, node_id=None, topology=None, **kwargs): def add_agent(self, agent_class, *args, node_id=None, topology=None, **kwargs):
if node_id is None and topology is None: if node_id is None and topology is None:
@ -373,7 +408,7 @@ class EventedEnvironment(BaseEnvironment):
for agent in self.agents(**kwargs): for agent in self.agents(**kwargs):
if agent == sender: if agent == sender:
continue continue
self.logger.info(f"Telling {repr(agent)}: {msg} ttl={ttl}") self.logger.debug(f"Telling {repr(agent)}: {msg} ttl={ttl}")
try: try:
inbox = agent._inbox inbox = agent._inbox
except AttributeError: except AttributeError:

@ -8,9 +8,10 @@ from textwrap import dedent, indent
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import networkx as nx import networkx as nx
import pandas as pd
from .serialization import deserialize from .serialization import deserialize, serialize
from .utils import try_backup, open_or_reuse, logger, timer from .utils import try_backup, open_or_reuse, logger, timer
@ -68,12 +69,12 @@ class Exporter:
"""Method to call when the simulation ends""" """Method to call when the simulation ends"""
pass pass
def trial_start(self, env): def iteration_start(self, env):
"""Method to call when a trial start""" """Method to call when a iteration start"""
pass pass
def trial_end(self, env): def iteration_end(self, env, params, params_id):
"""Method to call when a trial ends""" """Method to call when a iteration ends"""
pass pass
def output(self, f, mode="w", **kwargs): def output(self, f, mode="w", **kwargs):
@ -85,27 +86,39 @@ class Exporter:
f = os.path.join(self.outdir, f) f = os.path.join(self.outdir, f)
except TypeError: except TypeError:
pass pass
return open_or_reuse(f, mode=mode, **kwargs) return open_or_reuse(f, mode=mode, backup=self.simulation.backup, **kwargs)
def get_dfs(self, env): def get_dfs(self, env, **kwargs):
yield from get_dc_dfs(env.datacollector, trial_id=env.id) yield from get_dc_dfs(env.datacollector,
simulation_id=self.simulation.id,
iteration_id=env.id,
def get_dc_dfs(dc, trial_id=None): **kwargs)
dfs = {
"env": dc.get_model_vars_dataframe(),
"agents": dc.get_agent_vars_dataframe(), def get_dc_dfs(dc, **kwargs):
} dfs = {}
dfe = dc.get_model_vars_dataframe()
dfe.index.rename("step", inplace=True)
dfs["env"] = dfe
try:
dfa = dc.get_agent_vars_dataframe()
dfa.index.rename(["step", "agent_id"], inplace=True)
dfs["agents"] = dfa
except UserWarning:
pass
for table_name in dc.tables: for table_name in dc.tables:
dfs[table_name] = dc.get_table_dataframe(table_name) dfs[table_name] = dc.get_table_dataframe(table_name)
if trial_id: for (name, df) in dfs.items():
for (name, df) in dfs.items(): for (k, v) in kwargs.items():
df["trial_id"] = trial_id df[k] = v
df.set_index(["simulation_id", "iteration_id"], append=True, inplace=True)
yield from dfs.items() yield from dfs.items()
class SQLite(Exporter): class SQLite(Exporter):
"""Writes sqlite results""" """Writes sqlite results"""
sim_started = False
def sim_start(self): def sim_start(self):
if not self.dump: if not self.dump:
@ -113,46 +126,64 @@ class SQLite(Exporter):
return return
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite") self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
logger.info("Dumping results to %s", self.dbpath) logger.info("Dumping results to %s", self.dbpath)
try_backup(self.dbpath, remove=True) if self.simulation.backup:
try_backup(self.dbpath, remove=True)
def trial_end(self, env):
if self.simulation.overwrite:
if os.path.exists(self.dbpath):
os.remove(self.dbpath)
self.engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
sim_dict = {k: serialize(v)[0] for (k,v) in self.simulation.to_dict().items()}
sim_dict["simulation_id"] = self.simulation.id
df = pd.DataFrame([sim_dict])
df.to_sql("configuration", con=self.engine, if_exists="append")
def iteration_end(self, env, params, params_id, *args, **kwargs):
if not self.dump: if not self.dump:
logger.info("Running in NO DUMP mode, the database will NOT be created") logger.info("Running in NO DUMP mode. Results will NOT be saved to a DB.")
return return
with timer( with timer(
"Dumping simulation {} trial {}".format(self.simulation.name, env.id) "Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
): ):
engine = create_engine(f"sqlite:///{self.dbpath}", echo=False) pd.DataFrame([{"simulation_id": self.simulation.id,
"params_id": params_id,
"iteration_id": env.id,
"key": k,
"value": serialize(v)[0]} for (k,v) in params.items()]).to_sql("parameters", con=self.engine, if_exists="append")
for (t, df) in self.get_dfs(env): for (t, df) in self.get_dfs(env, params_id=params_id):
df.to_sql(t, con=engine, if_exists="append") df.to_sql(t, con=self.engine, if_exists="append")
class csv(Exporter): class csv(Exporter):
"""Export the state of each environment (and its agents) a CSV file for the simulation"""
"""Export the state of each environment (and its agents) in a separate CSV file""" def sim_start(self):
super().sim_start()
def trial_end(self, env): def iteration_end(self, env, params, params_id, *args, **kwargs):
with timer( with timer(
"[CSV] Dumping simulation {} trial {} @ dir {}".format( "[CSV] Dumping simulation {} iteration {} @ dir {}".format(
self.simulation.name, env.id, self.outdir self.simulation.name, env.id, self.outdir
) )
): ):
for (df_name, df) in self.get_dfs(env): for (df_name, df) in self.get_dfs(env, params_id=params_id):
with self.output("{}.{}.csv".format(env.id, df_name)) as f: with self.output("{}.{}.csv".format(env.id, df_name), mode="a") as f:
df.to_csv(f) df.to_csv(f)
# TODO: reimplement GEXF exporting without history # TODO: reimplement GEXF exporting without history
class gexf(Exporter): class gexf(Exporter):
def trial_end(self, env): def iteration_end(self, env, *args, **kwargs):
if not self.dump: if not self.dump:
logger.info("Not dumping GEXF (NO_DUMP mode)") logger.info("Not dumping GEXF (NO_DUMP mode)")
return return
with timer( with timer(
"[GEXF] Dumping simulation {} trial {}".format(self.simulation.name, env.id) "[GEXF] Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
): ):
with self.output("{}.gexf".format(env.id), mode="wb") as f: with self.output("{}.gexf".format(env.id), mode="wb") as f:
network.dump_gexf(env.history_to_graph(), f) network.dump_gexf(env.history_to_graph(), f)
@ -164,13 +195,13 @@ class dummy(Exporter):
with self.output("dummy", "w") as f: with self.output("dummy", "w") as f:
f.write("simulation started @ {}\n".format(current_time())) f.write("simulation started @ {}\n".format(current_time()))
def trial_start(self, env): def iteration_start(self, env):
with self.output("dummy", "w") as f: with self.output("dummy", "w") as f:
f.write("trial started@ {}\n".format(current_time())) f.write("iteration started@ {}\n".format(current_time()))
def trial_end(self, env): def iteration_end(self, env, *args, **kwargs):
with self.output("dummy", "w") as f: with self.output("dummy", "w") as f:
f.write("trial ended@ {}\n".format(current_time())) f.write("iteration ended@ {}\n".format(current_time()))
def sim_end(self): def sim_end(self):
with self.output("dummy", "a") as f: with self.output("dummy", "a") as f:
@ -178,7 +209,7 @@ class dummy(Exporter):
class graphdrawing(Exporter): class graphdrawing(Exporter):
def trial_end(self, env): def iteration_end(self, env, *args, **kwargs):
# Outside effects # Outside effects
f = plt.figure() f = plt.figure()
nx.draw( nx.draw(
@ -193,9 +224,9 @@ class graphdrawing(Exporter):
class summary(Exporter): class summary(Exporter):
"""Print a summary of each trial to sys.stdout""" """Print a summary of each iteration to sys.stdout"""
def trial_end(self, env): def iteration_end(self, env, *args, **kwargs):
msg = "" msg = ""
for (t, df) in self.get_dfs(env): for (t, df) in self.get_dfs(env):
if not len(df): if not len(df):
@ -227,7 +258,7 @@ class YAML(Exporter):
if not self.dump: if not self.dump:
logger.debug("NOT dumping results") logger.debug("NOT dumping results")
return return
with self.output(self.simulation.name + ".dumped.yml") as f: with self.output(self.simulation.id + ".dumped.yml") as f:
logger.info(f"Dumping simulation configuration to {self.outdir}") logger.info(f"Dumping simulation configuration to {self.outdir}")
f.write(self.simulation.to_yaml()) f.write(self.simulation.to_yaml())
@ -238,19 +269,14 @@ class default(Exporter):
exporter_cls = exporter_cls or [YAML, SQLite] exporter_cls = exporter_cls or [YAML, SQLite]
self.inner = [cls(*args, **kwargs) for cls in exporter_cls] self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
def sim_start(self): def sim_start(self, *args, **kwargs):
for exporter in self.inner: for exporter in self.inner:
exporter.sim_start() exporter.sim_start(*args, **kwargs)
def sim_end(self): def sim_end(self, *args, **kwargs):
for exporter in self.inner: for exporter in self.inner:
exporter.sim_end() exporter.sim_end(*args, **kwargs)
def trial_start(self, env):
for exporter in self.inner:
exporter.trial_start(env)
def trial_end(self, env): def iteration_end(self, *args, **kwargs):
for exporter in self.inner: for exporter in self.inner:
exporter.trial_end(env) exporter.iteration_end(*args, **kwargs)

@ -140,7 +140,7 @@ def get_module(modname):
module = importlib.import_module(modname) module = importlib.import_module(modname)
KNOWN_MODULES[modname] = module KNOWN_MODULES[modname] = module
return KNOWN_MODULES[modname] return KNOWN_MODULES[modname]
def name(value, known_modules=KNOWN_MODULES): def name(value, known_modules=KNOWN_MODULES):
"""Return a name that can be imported, to serialize/deserialize an object""" """Return a name that can be imported, to serialize/deserialize an object"""
@ -181,7 +181,7 @@ def serialize_dict(d, known_modules=KNOWN_MODULES):
d = dict(d) d = dict(d)
except (ValueError, TypeError) as ex: except (ValueError, TypeError) as ex:
return serialize(d)[0] return serialize(d)[0]
for (k, v) in d.items(): for (k, v) in reversed(list(d.items())):
if isinstance(v, dict): if isinstance(v, dict):
d[k] = serialize_dict(v, known_modules=known_modules) d[k] = serialize_dict(v, known_modules=known_modules)
elif isinstance(v, list): elif isinstance(v, list):

@ -1,23 +1,26 @@
import os import os
from time import time as current_time, strftime from time import time as current_time, strftime
import importlib
import sys import sys
import yaml import yaml
import traceback import hashlib
import inspect import inspect
import logging import logging
import networkx as nx import networkx as nx
from tqdm.auto import tqdm
from textwrap import dedent from textwrap import dedent
from dataclasses import dataclass, field, asdict, replace from dataclasses import dataclass, field, asdict, replace
from typing import Any, Dict, Union, Optional, List from typing import Any, Dict, Union, Optional, List
from networkx.readwrite import json_graph
from functools import partial from functools import partial
from contextlib import contextmanager from contextlib import contextmanager
import pickle from itertools import product
import json
from . import serialization, exporters, utils, basestring, agents from . import serialization, exporters, utils, basestring, agents
from .environment import Environment from .environment import Environment
@ -41,11 +44,13 @@ def do_not_run():
def _iter_queued(): def _iter_queued():
while _QUEUED: while _QUEUED:
(cls, args, kwargs) = _QUEUED.pop(0) (cls, params) = _QUEUED.pop(0)
yield replace(cls, **kwargs) yield replace(cls, parameters=params)
# TODO: change documentation for simulation # TODO: change documentation for simulation
# TODO: rename iterations to iterations
# TODO: make parameters a dict of iterable/any
@dataclass @dataclass
class Simulation: class Simulation:
""" """
@ -57,18 +62,21 @@ class Simulation:
description: A description of the simulation. description: A description of the simulation.
group: The group that the simulation belongs to. group: The group that the simulation belongs to.
model: The model to use for the simulation. This can be a string or a class. model: The model to use for the simulation. This can be a string or a class.
model_params: The parameters to pass to the model. parameters: The parameters to pass to the model.
matrix: A matrix of values for each parameter.
seed: The seed to use for the simulation. seed: The seed to use for the simulation.
dir_path: The directory path to use for the simulation. dir_path: The directory path to use for the simulation.
max_time: The maximum time to run the simulation. max_time: The maximum time to run the simulation.
max_steps: The maximum number of steps to run the simulation. max_steps: The maximum number of steps to run the simulation.
interval: The interval to use for the simulation. interval: The interval to use for the simulation.
num_trials: The number of trials (times) to run the simulation. iterations: The number of iterations (times) to run the simulation.
num_processes: The number of processes to use for the simulation. If greater than one, simulations will be performed in parallel. This may make debugging and error handling difficult. num_processes: The number of processes to use for the simulation. If greater than one, simulations will be performed in parallel. This may make debugging and error handling difficult.
tables: The tables to use in the simulation datacollector tables: The tables to use in the simulation datacollector
agent_reporters: The agent reporters to use in the datacollector agent_reporters: The agent reporters to use in the datacollector
model_reporters: The model reporters to use in the datacollector model_reporters: The model reporters to use in the datacollector
dry_run: Whether or not to run the simulation. If True, the simulation will not be run. dry_run: Whether or not to run the simulation. If True, the simulation will not be run.
backup: Whether or not to backup the simulation. If True, the simulation files will be backed up to a different directory.
overwrite: Whether or not to replace existing simulation data.
source_file: Python file to use to find additional classes. source_file: Python file to use to find additional classes.
""" """
@ -77,24 +85,28 @@ class Simulation:
    name: Optional[str] = None
    description: Optional[str] = ""
    group: str = None
    backup: bool = False
    overwrite: bool = False
    dry_run: bool = False
    dump: bool = False
    model: Union[str, type] = "soil.Environment"
    parameters: dict = field(default_factory=dict)
    matrix: dict = field(default_factory=dict)
    seed: str = "default"
    dir_path: str = field(default_factory=lambda: os.getcwd())
    max_time: float = None
    max_steps: int = None
    interval: int = 1
    iterations: int = 1
    num_processes: Optional[int] = 1
    exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
    model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
    agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
    tables: Optional[Dict[str, Any]] = field(default_factory=dict)
    outdir: str = field(default_factory=lambda: os.path.join(os.getcwd(), "soil_output"))
    # outdir: Optional[str] = None
    exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
    level: int = logging.INFO
    extra: Dict[str, Any] = field(default_factory=dict)
    skip_test: Optional[bool] = False
    debug: Optional[bool] = False
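# For illustration only: a minimal sketch of how these fields are typically filled
# in, modelled on the tests further below. `TinyEnv` is an invented name.

from mesa import Agent as MesaAgent
from soil import environment, simulation

class TinyEnv(environment.Environment):
    def init(self):
        # give the scheduler at least one agent to step
        self.add_agent(agent_class=MesaAgent)

s = simulation.Simulation(model=TinyEnv, name="tiny_sim",
                          iterations=2, max_time=5, dump=False)
envs = s.run()  # one resulting environment per iteration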
@ -103,14 +115,39 @@ class Simulation:
        if isinstance(self.model, str):
            self.name = self.model
        else:
            self.name = self.model.__name__
        self.logger = logger.getChild(self.name)
        self.logger.setLevel(self.level)

        if self.source_file:
            source_file = self.source_file
            if not os.path.isabs(source_file):
                source_file = os.path.abspath(os.path.join(self.dir_path, source_file))
            serialization.add_source_file(source_file)
            self.source_file = source_file

        if isinstance(self.model, str):
            self.model = serialization.deserialize(self.model)

        def deserialize_reporters(reporters):
            for (k, v) in reporters.items():
                if isinstance(v, str) and v.startswith("py:"):
                    reporters[k] = serialization.deserialize(v.split(":", 1)[1])
            return reporters

        self.agent_reporters = deserialize_reporters(self.agent_reporters)
        self.model_reporters = deserialize_reporters(self.model_reporters)
        self.tables = deserialize_reporters(self.tables)

        if self.source_file:
            serialization.remove_source_file(self.source_file)
        self.id = f"{self.name}_{current_time()}"
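# Usage note for the "py:" prefix handled above: string reporters can point at a
# plain Python callable by dotted name, which keeps YAML/JSON configs serializable.
# A hedged sketch, where `mymodule.count_agents` is a hypothetical user function:

# mymodule.py (user code)
def count_agents(model):
    return model.schedule.get_agent_count()

# simulation definition referencing it by name
s = simulation.Simulation(
    model="soil.Environment",
    model_reporters={"agent_count": "py:mymodule.count_agents"},
    iterations=1,
    dump=False,
)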
    def run(self, **kwargs):
        """Run the simulation and return the list of resulting environments"""
        if kwargs:
            return replace(self, **kwargs).run()

        self.logger.debug(
            dedent(
                """
                Simulation:
@ -119,179 +156,156 @@ class Simulation:
            )
            + self.to_yaml()
        )
        param_combinations = self._collect_params(**kwargs)
        if _AVOID_RUNNING:
            _QUEUED.extend((self, param) for param in param_combinations)
            return []

        self.logger.debug("Using exporters: %s", self.exporters or [])

        exporters = serialization.deserialize_all(
            self.exporters,
            simulation=self,
            known_modules=[
                "soil.exporters",
            ],
            dump=self.dump and not self.dry_run,
            outdir=self.outdir,
            **self.exporter_params,
        )

        results = []
        for exporter in exporters:
            exporter.sim_start()

        for params in tqdm(param_combinations, desc=self.name, unit="configuration"):
            for (k, v) in params.items():
                tqdm.write(f"{k} = {v}")
            sha = hashlib.sha256()
            sha.update(repr(sorted(params.items())).encode())
            params_id = sha.hexdigest()[:7]
            for env in self._run_iters_for_params(params):
                for exporter in exporters:
                    exporter.iteration_end(env, params, params_id)
                results.append(env)

        for exporter in exporters:
            exporter.sim_end()

        return results

    def _collect_params(self):
        parameters = []
        if self.parameters:
            parameters.append(self.parameters)
        if self.matrix:
            assert isinstance(self.matrix, dict)
            for values in product(*(self.matrix.values())):
                parameters.append(dict(zip(self.matrix.keys(), values)))

        if not parameters:
            parameters = [{}]
        if self.dump:
            self.logger.info("Output directory: %s", self.outdir)

        return parameters
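# For reference, the matrix expansion above boils down to a cartesian product of
# the value lists; a standalone sketch of the same idea:

from itertools import product

matrix = {"a": [1, 2], "b": [3, 4]}
combinations = [dict(zip(matrix.keys(), values))
                for values in product(*matrix.values())]
# -> [{'a': 1, 'b': 3}, {'a': 1, 'b': 4}, {'a': 2, 'b': 3}, {'a': 2, 'b': 4}]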
    def _run_iters_for_params(
        self,
        params
    ):
        """Run the simulation and yield the resulting environments."""
        try:
            if self.source_file:
                serialization.add_source_file(self.source_file)
            with utils.timer(f"running for config {params}"):
                if self.dry_run:
                    def func(*args, **kwargs):
                        return None
                else:
                    func = self._run_model

                for env in tqdm(utils.run_parallel(
                    func=func,
                    iterable=range(self.iterations),
                    **params,
                ), total=self.iterations, leave=False):
                    if env is None and self.dry_run:
                        continue

                    yield env
        finally:
            if self.source_file:
                serialization.remove_source_file(self.source_file)
    def _get_env(self, iteration_id, params):
        """Create an environment for an iteration of the simulation"""

        iteration_id = str(iteration_id)

        agent_reporters = self.agent_reporters
        agent_reporters.update(params.pop("agent_reporters", {}))
        model_reporters = self.model_reporters
        model_reporters.update(params.pop("model_reporters", {}))

        return self.model(
            id=iteration_id,
            seed=f"{self.seed}_iteration_{iteration_id}",
            dir_path=self.dir_path,
            interval=self.interval,
            logger=self.logger.getChild(iteration_id),
            agent_reporters=agent_reporters,
            model_reporters=model_reporters,
            tables=self.tables,
            **params,
        )
    def _run_model(self, iteration_id, **params):
        """
        Run a single iteration of the simulation
        """
        # Set-up iteration environment and graph
        model = self._get_env(iteration_id, params)
        with utils.timer("Simulation {} iteration {}".format(self.name, iteration_id)):

            max_time = self.max_time
            max_steps = self.max_steps

            if (max_time is not None) and (max_steps is not None):
                is_done = lambda model: (not model.running) or (model.schedule.time >= max_time) or (model.schedule.steps >= max_steps)
            elif max_time is not None:
                is_done = lambda model: (not model.running) or (model.schedule.time >= max_time)
            elif max_steps is not None:
                is_done = lambda model: (not model.running) or (model.schedule.steps >= max_steps)
            else:
                is_done = lambda model: not model.running

            if not model.schedule.agents:
                raise Exception("No agents in model. This is probably a bug. Make sure that the model has agents scheduled after its initialization.")

            newline = "\n"
            self.logger.debug(
                dedent(
                    f"""
                    Model stats:
                      Agent count: { model.schedule.get_agent_count() }):
                      Topology size: { len(model.G) if hasattr(model, "G") else 0 }
                    """
                )
            )

            if self.debug:
                set_trace()

            while not is_done(model):
                self.logger.debug(
                    f'Simulation time {model.schedule.time}/{max_time}.'
                )
                model.step()

        return model
@ -333,10 +347,9 @@ def from_config(conf_or_path):
    return lst[0]


def iter_from_py(pyfile, module_name='imported_file', **kwargs):
    """Try to load every Simulation instance in a given Python file"""
    import importlib
    added = False
    sims = []
    assert not _AVOID_RUNNING
@ -377,3 +390,6 @@ def run_from_file(*files, **kwargs):
    for sim in iter_from_file(*files):
        logger.info(f"Using config(s): {sim.name}")
        sim.run_simulation(**kwargs)


def run(env, iterations=1, num_processes=1, dump=False, name="test", **kwargs):
    return Simulation(model=env, iterations=iterations, name=name, dump=dump, num_processes=num_processes, **kwargs).run()
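# A possible use of the convenience wrapper above (sketch; `TinyEnv` is an invented
# Environment subclass, and extra keyword arguments are forwarded to `Simulation`):

from mesa import Agent as MesaAgent
from soil import environment, simulation

class TinyEnv(environment.Environment):
    def init(self):
        self.add_agent(agent_class=MesaAgent)

envs = simulation.run(TinyEnv, iterations=3, max_time=2, dump=False)
assert len(envs) == 3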

@ -1,6 +1,6 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapreplace
import math
from inspect import getsource
@ -99,7 +99,8 @@ class TimedActivation(BaseScheduler):
        self._shuffle = shuffle
        # self.step_interval = getattr(self.model, "interval", 1)
        self.step_interval = self.model.interval
        self.logger = getattr(self.model, "logger", logger).getChild(f"time_{ self.model }")
        self.next_time = self.time

    def add(self, agent: MesaAgent, when=None):
        if when is None:
@ -110,7 +111,7 @@ class TimedActivation(BaseScheduler):
        self._schedule(agent, None, when)
        super().add(agent)

    def _schedule(self, agent, condition=None, when=None, replace=False):
        if condition:
            if not when:
                when, condition = condition.schedule_next(
@ -125,7 +126,10 @@ class TimedActivation(BaseScheduler):
        else:
            key = (when, agent.unique_id, condition)
        self._next[agent.unique_id] = key
        if replace:
            heapreplace(self._queue, (key, agent))
        else:
            heappush(self._queue, (key, agent))
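# Context for the heappush/heapreplace split above: heapreplace pops the current
# smallest entry and pushes the new one in a single operation, which is how the
# scheduler swaps out the item at the head of the queue. A standalone illustration:

from heapq import heapify, heappush, heapreplace

queue = [(5, "b"), (1, "a"), (9, "c")]
heapify(queue)

heapreplace(queue, (3, "a2"))   # drops the current head (1, "a") and inserts (3, "a2")
assert queue[0] == (3, "a2")

heappush(queue, (0, "d"))       # a plain push keeps all existing entries
assert queue[0] == (0, "d")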
    def step(self) -> None:
        """
@ -144,10 +148,9 @@ class TimedActivation(BaseScheduler):
            if when > self.time:
                break

            if cond:
                if not cond.ready(agent, self.time):
                    self._schedule(agent, cond, replace=True)
                    continue
                try:
                    agent._last_return = cond.return_value(agent)
@ -164,36 +167,38 @@ class TimedActivation(BaseScheduler):
                returned = agent.step()
            except DeadAgent:
                agent.alive = False
                heappop(self._queue)
                continue

            # Check status for MESA agents
            if not getattr(agent, "alive", True):
                heappop(self._queue)
                continue

            if returned:
                next_check = returned.schedule_next(
                    self.time, self.step_interval, first=True
                )
                self._schedule(agent, when=next_check[0], condition=next_check[1], replace=True)
            else:
                next_check = (self.time + self.step_interval, None)
                self._schedule(agent, replace=True)

            self.steps += 1

        if not self._queue:
            self.model.running = False
            self.time = INFINITY
            return

        next_time = self._queue[0][0][0]
        if next_time < self.time:
            raise Exception(
                f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
            )
        self.logger.debug("Updating time step: %s -> %s ", self.time, next_time)

        self.time = next_time

@ -10,7 +10,7 @@ from multiprocessing import Pool, cpu_count
from contextlib import contextmanager

logger = logging.getLogger("soil")
logger.setLevel(logging.WARNING)

timeformat = "%H:%M:%S"
@ -24,7 +24,7 @@ consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)

logging.basicConfig(
    level=logging.INFO,
    handlers=[
        consoleHandler,
    ],
@ -60,7 +60,7 @@ def try_backup(path, remove=False):
    if not os.path.exists(backup_dir):
        os.makedirs(backup_dir)
    newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
    if remove:
        move(path, newpath)
    else:
        copyfile(path, newpath)
@ -126,7 +126,7 @@ def unflatten_dict(d):
def run_and_return_exceptions(func, *args, **kwargs):
    """
    A wrapper for a function that catches exceptions and returns them.
    It is meant for async simulations.
    """
    try:
@ -154,3 +154,7 @@ def run_parallel(func, iterable, num_processes=1, **kwargs):
    else:
        for i in iterable:
            yield func(i, **kwargs)


def int_seed(seed: str):
    return int.from_bytes(seed.encode(), "little")
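# Quick illustration of the helper above: it maps a seed string to a deterministic
# integer (little-endian over the encoded bytes), e.g. for RNGs that only accept ints:

assert int_seed("a") == 97                   # ord("a")
assert int_seed("ab") == 97 + 98 * 256       # little-endian byte order
assert int_seed("soil") == int_seed("soil")  # deterministic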

@ -54,8 +54,7 @@ class TestAgents(TestCase):
        class MyAgent(agents.FSM):
            run = 0

            @agents.state("original", default=True)
            def root(self):
                self.run += 1
                return self.other
@ -65,10 +64,11 @@ class TestAgents(TestCase):
                self.run += 1

        e = environment.Environment()
        a = e.add_agent(MyAgent)
        e.step()
        assert a.run == 1
        a.step()
        print("DONE")

    def test_broadcast(self):
        """

@ -28,13 +28,16 @@ class TestConfig(TestCase):
    def test_torvalds_config(self):
        sim = simulation.from_config(os.path.join(ROOT, "test_config.yml"))
        MAX_STEPS = 10
        INTERVAL = 2
        assert sim.interval == INTERVAL
        assert sim.max_steps == MAX_STEPS
        envs = sim.run()
        assert len(envs) == 1
        env = envs[0]
        assert env.interval == 2
        assert env.count_agents() == 3
        assert env.now == INTERVAL * MAX_STEPS


def make_example_test(path, cfg):
@ -42,10 +45,10 @@ def make_example_test(path, cfg):
        root = os.getcwd()
        print(path)
        s = simulation.from_config(cfg)
        iterations = s.max_time * s.iterations
        if iterations > 1000:
            s.max_time = 100
            s.iterations = 1
        if cfg.skip_test and not FORCE_TESTS:
            self.skipTest('Example ignored.')
        envs = s.run_simulation(dump=False)
@ -53,7 +56,7 @@ def make_example_test(path, cfg):
        for env in envs:
            assert env
            try:
                n = cfg.parameters['topology']['params']['n']
                assert len(list(env.network_agents)) == n
                assert env.now > 0  # It has run
                assert env.now <= cfg.max_time  # But not further than allowed

@ -28,17 +28,22 @@ def get_test_for_sims(sims, path):
        if sim.skip_test and not FORCE_TESTS:
            continue
        run = True

        if sim.max_steps is None:
            sim.max_steps = 100
        iterations = sim.max_steps * sim.iterations
        if iterations < 0 or iterations > 1000:
            sim.max_steps = 100
            sim.iterations = 1

        envs = sim.run(dump=False)
        assert envs
        for env in envs:
            assert env
            assert env.now > 0
            try:
                n = sim.parameters["network_params"]["n"]
                assert len(list(env.network_agents)) == n
            except KeyError:
                pass

@ -9,28 +9,29 @@ from soil import exporters
from soil import environment
from soil import simulation
from soil import agents
from soil import decorators
from mesa import Agent as MesaAgent


class Dummy(exporters.Exporter):
    started = False
    iterations = 0
    ended = False
    total_time = 0
    called_start = 0
    called_iteration = 0
    called_end = 0

    def sim_start(self):
        self.__class__.called_start += 1
        self.__class__.started = True

    def iteration_end(self, env, *args, **kwargs):
        assert env
        self.__class__.iterations += 1
        self.__class__.total_time += env.now
        self.__class__.called_iteration += 1

    def sim_end(self):
        self.__class__.ended = True
@ -44,77 +45,78 @@ class Exporters(TestCase):
        class SimpleEnv(environment.Environment):
            def init(self):
                self.add_agent(agent_class=MesaAgent)

        iterations = 5
        max_time = 2
        s = simulation.Simulation(iterations=iterations,
                                  max_time=max_time, name="exporter_sim",
                                  exporters=[Dummy], dump=False, model=SimpleEnv)

        for env in s.run():
            assert len(env.agents) == 1

        assert Dummy.started
        assert Dummy.ended
        assert Dummy.called_start == 1
        assert Dummy.called_end == 1
        assert Dummy.called_iteration == iterations
        assert Dummy.iterations == iterations
        assert Dummy.total_time == max_time * iterations

    def test_writing(self):
        """Try to write CSV, sqlite and YAML (without no_dump)"""
        n_iterations = 5
        n_nodes = 4
        max_time = 2
        output = io.StringIO()
        tmpdir = tempfile.mkdtemp()

        class ConstantEnv(environment.Environment):
            @decorators.report
            @property
            def constant(self):
                return 1

        s = simulation.Simulation(
            model=ConstantEnv,
            name="exporter_sim",
            exporters=[
                exporters.default,
                exporters.csv,
            ],
            exporter_params={"copy_to": output},
            parameters=dict(
                network_generator="complete_graph",
                network_params={"n": n_nodes},
                agent_class="CounterModel",
                agent_reporters={"times": "times"},
            ),
            max_time=max_time,
            outdir=tmpdir,
            iterations=n_iterations,
            dump=True,
        )
        envs = s.run()
        result = output.getvalue()

        simdir = os.path.join(tmpdir, s.group or "", s.name)
        with open(os.path.join(simdir, "{}.dumped.yml".format(s.id))) as f:
            result = f.read()
            assert result

        try:
            dbpath = os.path.join(simdir, f"{s.name}.sqlite")
            db = sqlite3.connect(dbpath)
            cur = db.cursor()
            agent_entries = cur.execute("SELECT times FROM agents WHERE times > 0").fetchall()
            env_entries = cur.execute("SELECT constant from env WHERE constant == 1").fetchall()
            assert len(agent_entries) == n_nodes * n_iterations * max_time
            assert len(env_entries) == n_iterations * (max_time + 1)  # +1 for the initial state

            for e in envs:
                with open(os.path.join(simdir, "{}.env.csv".format(e.id))) as f:
                    result = f.read()
                    assert result
        finally:
            shutil.rmtree(tmpdir)

@ -30,14 +30,14 @@ class TestMain(TestCase):
    def test_empty_simulation(self):
        """A simulation with a base behaviour should do nothing"""
        config = {
            "parameters": {
                "topology": join(ROOT, "test.gexf"),
                "agent_class": MesaAgent,
            },
            "max_time": 1
        }
        s = simulation.from_config(config)
        s.run(dump=False)

    def test_network_agent(self):
        """
@ -45,9 +45,9 @@ class TestMain(TestCase):
        agent should be able to update its state."""
        config = {
            "name": "CounterAgent",
            "iterations": 1,
            "max_time": 2,
            "parameters": {
                "network_params": {
                    "generator": nx.complete_graph,
                    "n": 2,
@ -93,7 +93,7 @@ class TestMain(TestCase):
        try:
            os.chdir(os.path.dirname(pyfile))
            s = simulation.from_py(pyfile)
            env = s.run(dump=False)[0]
            for a in env.network_agents:
                skill_level = a["skill_level"]
                if a.node_id == "Torvalds":
@ -157,11 +157,11 @@ class TestMain(TestCase):
        n_trials = 50
        max_time = 2
        s = simulation.Simulation(
            parameters=dict(agents=dict(agent_classes=[CheckRun], k=1)),
            iterations=n_trials,
            max_time=max_time,
        )
        runs = list(s.run(dump=False))
        over = list(x.now for x in runs if x.now > 2)
        assert len(runs) == n_trials
        assert len(over) == 0
@ -212,13 +212,24 @@ class TestMain(TestCase):
        for sim in sims:
            assert sim
            assert sim.name == "newspread_sim"
            assert sim.iterations == 5
            assert sim.max_steps == 300
            assert not sim.dump
            assert sim.parameters
            assert "ratio_dumb" in sim.parameters
            assert "ratio_herd" in sim.parameters
            assert "ratio_wise" in sim.parameters
            assert "network_generator" in sim.parameters
            assert "network_params" in sim.parameters
            assert "prob_neighbor_spread" in sim.parameters

    def test_config_matrix(self):
        """It should be possible to specify a matrix of parameters"""
        a = [1, 2]
        b = [3, 4]
        sim = simulation.Simulation(matrix=dict(a=a, b=b))
        configs = sim._collect_params()
        assert len(configs) == len(a) * len(b)
        for i in a:
            for j in b:
                assert {"a": i, "b": j} in configs