mirror of https://github.com/gsi-upm/soil synced 2025-10-17 08:48:30 +00:00

Compare commits


10 Commits

Author SHA1 Message Date
J. Fernando Sánchez
50bca88362 Fix pre-release version of v1.0.0rc1 2023-04-20 18:07:42 +02:00
J. Fernando Sánchez
cc238d84ec Pre-release version of v1.0 2023-04-20 17:57:18 +02:00
J. Fernando Sánchez
be65592055 Default parameters terroristnetwork 2023-04-14 20:25:16 +02:00
J. Fernando Sánchez
1d882dcff6 Update easy function 2023-04-14 20:21:34 +02:00
J. Fernando Sánchez
b3e77cbff5 Update python version in gitlab-ci 2023-04-14 20:07:16 +02:00
J. Fernando Sánchez
05748a3250 Update python version requirement 2023-04-14 20:03:47 +02:00
J. Fernando Sánchez
a3fc6a5efa Update README 2023-04-14 19:56:44 +02:00
J. Fernando Sánchez
4e95709188 Update README 2023-04-14 19:53:31 +02:00
J. Fernando Sánchez
feab0ba79e Large set of changes for v0.30
The examples weren't being properly tested in the last commit. When we fixed
that, a lot of bugs in the new implementation of environment and agent were
found, which accounts for most of these changes.

The main difference is the mechanism to load simulations from a configuration
file. For that to work, we had to rework our module loading code in
`serialization` and add a `source_file` attribute to configurations (and
simulations, for that matter).
2023-04-14 19:41:24 +02:00
J. Fernando Sánchez
73282530fd Big refactor v0.30
All tests pass, except for the TestConfig suite, which is not too critical, as the
plan from this version onwards is to avoid configuration as much as possible.
2023-04-09 04:19:24 +02:00
111 changed files with 4659 additions and 90419 deletions

View File

@@ -20,7 +20,7 @@ docker:
test:
tags:
- docker
image: python:3.7
image: python:3.8
stage: test
script:
- pip install -r requirements.txt -r test-requirements.txt
@@ -31,7 +31,7 @@ push_pypi:
- tags
tags:
- docker
image: python:3.7
image: python:3.8
stage: publish
script:
- echo $CI_COMMIT_TAG > soil/VERSION
@@ -44,7 +44,7 @@ check_pypi:
- tags
tags:
- docker
image: python:3.7
image: python:3.8
stage: check_published
script:
- pip install soil==$CI_COMMIT_TAG

View File

@@ -3,22 +3,31 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.30 UNRELEASED]
## [1.0 UNRELEASED]
Version 1.0 introduced multiple changes, especially to the `Simulation` class and anything related to how configuration is handled.
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
### Added
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similarly to normal states, with one caveat: only `time` values can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
* Environments now have a class method to make them easier to use without a simulation: `run`. Notice that this is different from `run_model`, which is an instance method.
* Ability to run simulations using mesa models
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* Agents can now have generators as a step function or a state. They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or None) can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states (see the sketch after this list).
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a cartesian product (i.e., all possible combinations) of each parameter.
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
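A minimal sketch of a generator used as an FSM state, assuming the API described above (the `Waiter` agent and `WaiterEnv` environment are made up; `Delta`, `add_agent`, and `Simulation` appear elsewhere in this diff):

```python
from soil import Environment, Simulation
from soil.agents import FSM, state, default_state
from soil.time import Delta

class Waiter(FSM):
    @default_state
    @state
    def waiting(self):
        self.info("First half of the step")
        # Only time values (or None) may be yielded; the state stays the same
        yield Delta(2)
        self.info("Resumed two steps later, still in 'waiting'")
        # The return value may be a state, or a (state, time) tuple
        return self.done, Delta(5)

    @state
    def done(self):
        self.die()

class WaiterEnv(Environment):
    def init(self):
        self.add_agent(agent_class=Waiter)

sim = Simulation(model=WaiterEnv, max_steps=20, dump=False)

if __name__ == "__main__":
    sim.run()
```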
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Ability
* Configuration schema (`Simulation`) is very simplified. All simulations should be checked
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (the specific parameters for this environment); see the sketch after this list. We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa.
* `num_trials` renamed to `iterations`
* General renaming of `trial` to `iteration`, to work better with `mesa`
* `model_parameters` renamed to `parameters` in simulation
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
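A hedged sketch of a custom exporter under the new signature; the `Exporter` base class and the `iteration_end(env, params)` hook come from this entry, while the import path and the body of the method are illustrative assumptions:

```python
from soil.exporters import Exporter

class SummaryExporter(Exporter):
    def iteration_end(self, env, params):
        # env: the finished environment/model for this iteration
        # params: the specific parameters used for this iteration
        print(f"{env.__class__.__name__} finished; parameters: {params}")
```

Exporters are typically selected from the CLI, e.g. `soil cars.py -e summary --csv`, as in the cars example later in this diff.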
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
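A minimal sketch of recording state with a datacollector instead of `History`, using the reporter helpers that appear in the examples of this diff (`add_agent_reporter`, `@report`); the environment, agent, and attribute names are made up:

```python
from soil import BaseAgent, Environment, Simulation, report

class Walker(BaseAgent):
    steps_taken = 0

    def step(self):
        self.steps_taken += 1

class RecordedEnv(Environment):
    def init(self):
        self.add_agent(agent_class=Walker)
        # Collected by the model's datacollector at every step
        self.add_agent_reporter("steps_taken")

    @report
    @property
    def total_steps(self):
        return sum(a.steps_taken for a in self.agents)

sim = Simulation(model=RecordedEnv, max_steps=10, dump=False)

if __name__ == "__main__":
    sim.run()
```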
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)

View File

@@ -1,20 +1,20 @@
# [SOIL](https://github.com/gsi-upm/soil)
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
**Note**: Soil 0.30 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/migration_0.30.rst)
> **Warning**
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst)
## SOIL vs MESA
## Features
SOIL is a batteries-included platform that builds on top of MESA and provides the following out of the box:
* Integration with (social) networks
* The ability to more easily assign agents to your model (and optionally to its network):
* Assigning agents to nodes, and vice versa
* Using a description (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* Integration with (social) networks (through `networkx`)
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
* Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`); see the sketch below
* Based on the topology of the network
* **Several types of abstractions for agents**:
* Finite state machine, where methods can be turned into a state
* Network agents, which have convenience methods to access the model's topology
@@ -33,15 +33,18 @@ SOIL is a batteries-included platform that builds on top of MESA and provides th
* Run models in parallel
* Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to run multiple
* A command line interface (`soil`), to quickly run simulations with different parameters
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
Nevertheless, most features in SOIL have been designed to integrate with plain Mesa.
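As a rough sketch of the distribution-based assignment mentioned above (the `populate_network` and `create_network` helpers and `CounterModel` appear in the examples later in this diff; `Foo`, `Bar`, and the network parameters are made up):

```python
from networkx import erdos_renyi_graph
from soil import CounterModel, Environment, Simulation

class Foo(CounterModel):
    pass

class Bar(CounterModel):
    pass

class MixedEnv(Environment):
    def init(self):
        self.create_network(generator=erdos_renyi_graph, n=100, p=0.05)
        # Roughly 90% of the network becomes Foo agents and 10% Bar agents
        self.populate_network([Foo, Bar], [0.9, 0.1])

sim = Simulation(model=MixedEnv, max_steps=10, dump=False)

if __name__ == "__main__":
    sim.run()
```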
## Mesa compatibility
SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
For instance, it should be possible to run a `mesa.Model` model using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler into a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or even wrong.
For instance, you may add any `soil.agent` agent (except for the `soil.NetworkAgent`, as it needs a topology) on a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that will greatly vary.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
For instance, you may add any `soil.agent` agent on a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that may not work.
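As a hedged illustration of this, a plain (hypothetical) `mesa.Model` can be driven by a `soil.Simulation`; the `Simulation` arguments mirror those used in the examples later in this diff:

```python
from mesa import Agent, Model
from mesa.time import RandomActivation
from soil import Simulation

class Walker(Agent):
    def step(self):
        self.model.total_steps += 1

class PlainMesaModel(Model):
    """A vanilla mesa model, with no soil-specific code."""

    def __init__(self, n=3, **kwargs):
        super().__init__(**kwargs)
        self.total_steps = 0
        self.schedule = RandomActivation(self)
        for i in range(n):
            self.schedule.add(Walker(i, self))

    def step(self):
        self.schedule.step()

# The same entry point used for soil environments also runs a mesa model
sim = Simulation(model=PlainMesaModel, parameters=dict(n=5),
                 max_steps=10, iterations=1, dump=False)

if __name__ == "__main__":
    sim.run()
```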
## Changes in version 0.3

View File

@@ -1,7 +1,20 @@
Welcome to Soil's documentation!
================================
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
.. image:: soil.png
:width: 80%
:align: center
Soil can be installed through pip (see more details in the :doc:`installation` page):
.. code:: bash
pip install soil
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`_.
If you use Soil in your research, do not forget to cite this paper:
@@ -33,8 +46,6 @@ If you use Soil in your research, do not forget to cite this paper:
:caption: Learn more about soil:
installation
quickstart
configuration
Tutorial <soil_tutorial>
..

View File

@@ -1,7 +1,10 @@
Installation
------------
The easiest way to install Soil is through pip, with Python >= 3.4:
Through pip
===========
The easiest way to install Soil is through pip, with Python >= 3.8:
.. code:: bash
@@ -25,4 +28,38 @@ Or, if you're using soil programmatically:
import soil
print(soil.__version__)
The latest version can be installed through `GitHub <https://github.com/gsi-upm/soil>`_ or `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_.
Web UI
======
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web
Development
===========
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
.. code:: bash
git clone https://github.com/gsi-upm/soil
cd soil
python -m venv .venv
source .venv/bin/activate
pip install --editable .

docs/notes_v1.0.rst Normal file
View File

@@ -0,0 +1,35 @@
What are the main changes in version 1.0?
#########################################
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.
We drew several lessons from the previous version of Soil, and tried to address them in this version.
Mainly:
- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases, where only a handful of variables need to be stored. This fits nicely with Mesa's data collection system.
- The API was too complex, and it was difficult to understand how to use it.
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.
This rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
The new Soil is less configuration-centric.
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.
How are agents assigned to nodes in the network?
#################################################
The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
If `topology` is not provided, it will default to `self.model.topology`.
This assignment may fail if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
If `node_id` is not provided, random nodes are drawn from the topology until one without an agent is found.
The `node_id` of that node is then assigned to the agent.
If every node already has an agent, a new node is automatically added to the topology.
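A minimal sketch of both assignment paths, assuming the helpers used in the examples elsewhere in this diff (``create_network``, ``add_agent``); the agent and environment names are hypothetical:

.. code:: python

    from networkx import complete_graph
    from soil import Environment, Simulation
    from soil.agents import NetworkAgent

    class Walker(NetworkAgent):
        def step(self):
            self.debug(f"Attached to node {self.node_id}")

    class AssignmentEnv(Environment):
        def init(self):
            self.create_network(generator=complete_graph, n=3)
            # Explicit placement: this agent is bound to node 0
            self.add_agent(agent_class=Walker, node_id=0)
            # No node_id: a free node is picked at random; if every node
            # already has an agent, a new node is added to the topology
            self.add_agent(agent_class=Walker)

    sim = Simulation(model=AssignmentEnv, max_steps=5, dump=False)

    if __name__ == "__main__":
        sim.run()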
Can Soil environments include more than one network / topology?
###############################################################
Yes, but each additional network has to be included manually.
Somewhere between 0.20 and 0.30 we added built-in support for multiple networks, but it was deemed too complex and removed.

Binary image files (contents not shown): four new notebook output images were added under docs/ (output_30_0.png, 4.7 KiB; output_34_0.png, 11 KiB; output_49_0.png, 26 KiB; output_50_0.png, 31 KiB), and over thirty previous output images under docs/ (roughly 5–19 KiB each) were removed.

View File

@@ -1,93 +0,0 @@
Quickstart
----------
This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.
There are mainly two parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
.. image:: soil.png
:width: 80%
:align: center
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agents classes, see :doc:`soil_tutorial`.
Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
.. literalinclude:: quickstart.yml
:language: yaml
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities of changing from one state to another, either spontaneously or because of contagion from neighboring agents.
Running the simulation
======================
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
.. code::
soil --graph --csv quickstart.yml [13:35:29]
INFO:soil:Using config(s): quickstart
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
INFO:soil:Starting simulation quickstart at 13:35:30.
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
INFO:soil:Dumping results to soil_output/quickstart
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
The ``CSV`` file should look like this:
.. code::
agent_id,t_step,key,value
env,0,neutral_discontent_spon_prob,0.05
env,0,neutral_discontent_infected_prob,0.1
env,0,neutral_content_spon_prob,0.2
env,0,neutral_content_infected_prob,0.4
env,0,discontent_neutral,0.2
env,0,discontent_content,0.05
env,0,content_discontent,0.05
env,0,variance_d_c,0.05
env,0,variance_c_d,0.1
Results and visualization
=========================
The environment variables are those with an ``agent_id`` of ``env``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
The dynamic graph is exported as a .gexf file which could be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web

View File

@@ -1,33 +0,0 @@
---
name: quickstart
num_trials: 1
max_time: 1000
model_params:
agents:
- agent_class: SISaModel
topology: true
state:
id: neutral
weight: 1
- agent_class: SISaModel
topology: true
state:
id: content
weight: 2
topology:
params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -1,54 +0,0 @@
---
version: '2'
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: Environment
model_params:
am_i_complete: true
topology:
params:
generator: complete_graph
n: 12
environment:
agents:
agent_class: CounterModel
topology: true
state:
times: 1
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: false
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0
group: fixed_counters
state:
times: 1
total: 0
- agent_class: CounterModel
group: fixed_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: distro_counters
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5

View File

@@ -1,16 +0,0 @@
---
name: custom-generator
description: Using a custom generator for the network
num_trials: 3
max_steps: 100
interval: 1
network_params:
generator: mymodule.mygenerator
# These are custom parameters
n: 10
n_edges: 5
network_agents:
- agent_class: CounterModel
weight: 1
state:
state_id: 0

View File

@@ -1,6 +1,7 @@
from networkx import Graph
import random
import networkx as nx
from soil import Simulation, Environment, CounterModel, parameters
def mygenerator(n=5, n_edges=5):
@@ -20,3 +21,19 @@ def mygenerator(n=5, n_edges=5):
n_out = random.choice(nodes)
G.add_edge(n_in, n_out)
return G
class GeneratorEnv(Environment):
"""Using a custom generator for the network"""
generator: parameters.function = staticmethod(mygenerator)
def init(self):
self.create_network(generator=self.generator, n=10, n_edges=5)
self.add_agents(CounterModel)
sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)
if __name__ == '__main__':
sim.run(dump=False)

View File

@@ -4,8 +4,7 @@ from soil.time import Delta
class Fibonacci(FSM):
"""Agent that only executes in t_steps that are Fibonacci numbers"""
defaults = {"prev": 1}
prev = 1
@default_state
@state
@@ -25,23 +24,18 @@ class Odds(FSM):
return None, Delta(1 + self.now % 2)
from soil import Simulation
from soil import Environment, Simulation
from networkx import complete_graph
simulation = Simulation(
model_params={
'agents':[
{'agent_class': Fibonacci, 'node_id': 0},
{'agent_class': Odds, 'node_id': 1}
],
'topology': {
'params': {
'generator': 'complete_graph',
'n': 2
}
},
},
max_time=100,
)
class TimeoutsEnv(Environment):
def init(self):
self.create_network(generator=complete_graph, n=2)
self.add_agent(agent_class=Fibonacci, node_id=0)
self.add_agent(agent_class=Odds, node_id=1)
sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)
if __name__ == "__main__":
simulation.run(dry_run=True)
sim.run(dump=False)

View File

@@ -2,6 +2,8 @@ This example can be run with command-line options, like this:
```bash
python cars.py --level DEBUG -e summary --csv
#or
soil cars.py -e summary
```
This will set the `CSV` (save the agent and model data to a CSV file) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.

View File

@@ -56,41 +56,25 @@ class City(EventedEnvironment):
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
n_cars = 1
n_passengers = 10
height = 100
width = 100
def init(self):
self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
if not self.agents:
self.add_agents(Driver, k=self.n_cars)
self.add_agents(Passenger, k=self.n_passengers)
def __init__(
self,
*args,
n_cars=1,
n_passengers=10,
height=100,
width=100,
agents=None,
model_reporters=None,
**kwargs,
):
self.grid = MultiGrid(width=width, height=height, torus=False)
if agents is None:
agents = []
for i in range(n_cars):
agents.append({"agent_class": Driver})
for i in range(n_passengers):
agents.append({"agent_class": Passenger})
model_reporters = model_reporters or {
"earnings": "total_earnings",
"n_passengers": "number_passengers",
}
print("REPORTERS", model_reporters)
super().__init__(
*args, agents=agents, model_reporters=model_reporters, **kwargs
)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
self.total_earnings = 0
self.add_model_reporter("total_earnings")
@property
def total_earnings(self):
return sum(d.earnings for d in self.agents(agent_class=Driver))
@report
@property
def number_passengers(self):
return self.count_agents(agent_class=Passenger)
@@ -110,9 +94,9 @@ class Driver(Evented, FSM):
def check_passengers(self):
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.info(f"Passengers left {c}")
self.debug(f"Passengers left {c}")
if not c:
self.die()
self.die("No more passengers")
@default_state
@state
@@ -145,17 +129,21 @@ class Driver(Evented, FSM):
@state
def driving(self):
"""The journey has been accepted. Pick them up and take them to their destination"""
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
while self.move_towards(self.journey.origin):
yield
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
while self.move_towards(self.journey.destination, with_passenger=True):
yield
self.info("Arrived at destination")
self.earnings += self.journey.tip
self.model.total_earnings += self.journey.tip
self.check_passengers()
return self.wandering
def move_towards(self, target, with_passenger=False):
"""Move one cell at a time towards a target"""
self.info(f"Moving { self.pos } -> { target }")
self.debug(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
@@ -189,8 +177,8 @@ class Passenger(Evented, FSM):
@state
def asking(self):
destination = (
self.random.randint(0, self.model.grid.height),
self.random.randint(0, self.model.grid.width),
self.random.randint(0, self.model.grid.height-1),
self.random.randint(0, self.model.grid.width-1),
)
self.journey = None
journey = Journey(
@@ -202,19 +190,21 @@ class Passenger(Evented, FSM):
timeout = 60
expiration = self.now + timeout
self.info(f"Asking for journey at: { self.pos }")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey:
self.info(f"Passenger at: { self.pos }. Checking for responses.")
self.debug(f"Waiting for responses at: { self.pos }")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Passenger at: { self.pos }. Asking for journey.")
self.info(f"Still no response. Waiting at: { self.pos }")
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
self.info(f"Got a response! Waiting for driver")
return self.driving_home
@state
@@ -228,16 +218,14 @@ class Passenger(Evented, FSM):
except events.TimedOut:
pass
self.info("Got home safe!")
self.die()
self.die("Got home safe!")
simulation = Simulation(
name="RideHailing",
model_class=City,
model_params={"n_passengers": 2},
seed="carsSeed",
)
simulation = Simulation(name="RideHailing",
model=City,
seed="carsSeed",
max_time=1000,
parameters=dict(n_passengers=2))
if __name__ == "__main__":
simulation.run()
easy(simulation)

View File

@@ -1,19 +0,0 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_steps: 100
interval: 1
seed: '1'
model_class: social_wealth.MoneyEnv
model_params:
generator: social_wealth.graph_generator
agents:
topology: true
distribution:
- agent_class: social_wealth.SocialMoneyAgent
weight: 1
N: 10
width: 50
height: 50

View File

@@ -0,0 +1,7 @@
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
if __name__ == "__main__":
sim.run()

View File

@@ -1,5 +1,5 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.UserParam import Slider, Choice
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
@@ -63,9 +63,8 @@ chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
parameters = {
"N": Slider(
"N",
5,
1,
@@ -73,8 +72,7 @@ model_params = {
1,
description="Choose how many agents to include in the model",
),
"height": UserSettableParameter(
"slider",
"height": Slider(
"height",
5,
5,
@@ -82,8 +80,7 @@ model_params = {
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width": Slider(
"width",
5,
5,
@@ -91,8 +88,7 @@ model_params = {
1,
description="Grid width",
),
"agent_class": UserSettableParameter(
"choice",
"agent_class": Choice(
"Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
@@ -102,12 +98,12 @@ model_params = {
canvas_element = CanvasGrid(
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
)
server.port = 8521

View File

@@ -53,7 +53,7 @@ class MoneyAgent(MesaAgent):
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
class SocialMoneyAgent(MoneyAgent, NetworkAgent):
wealth = 1
def give_money(self):

View File

@@ -1,133 +0,0 @@
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_dumb
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_half_herd
network_agents:
- agent_class: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.DumbViewer
state:
has_tv: true
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: false
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_steps: 300
name: Sim_all_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_wise_herd
network_agents:
- agent_class: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_steps: 300
name: Sim_all_wise
network_agents:
- agent_class: newsspread.WiseViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_class: newsspread.WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50

View File

@@ -1,87 +0,0 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging
class DumbViewer(FSM, NetworkAgent):
"""
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
"""
prob_neighbor_spread = 0.5
prob_tv_spread = 0.1
has_been_infected = False
@default_state
@state
def neutral(self):
if self["has_tv"]:
if self.prob(self.model["prob_tv_spread"]):
return self.infected
if self.has_been_infected:
return self.infected
@state
def infected(self):
for neighbor in self.get_neighbors(state_id=self.neutral.id):
if self.prob(self.model["prob_neighbor_spread"]):
neighbor.infect()
def infect(self):
"""
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.has_been_infected = True
class HerdViewer(DumbViewer):
"""
A viewer whose probability of infection depends on the state of its neighbors.
"""
def infect(self):
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
infected = self.count_neighbors(state_id=self.infected.id)
total = self.count_neighbors()
prob_infect = self.model["prob_neighbor_spread"] * infected / total
self.debug("prob_infect", prob_infect)
if self.prob(prob_infect):
self.has_been_infected = True
class WiseViewer(HerdViewer):
"""
A viewer that can change its mind.
"""
defaults = {
"prob_neighbor_spread": 0.5,
"prob_neighbor_cure": 0.25,
"prob_tv_spread": 0.1,
}
@state
def cured(self):
prob_cure = self.model["prob_neighbor_cure"]
for neighbor in self.get_neighbors(state_id=self.infected.id):
if self.prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self):
self.has_been_cured = True
@state
def infected(self):
if self.has_been_cured:
return self.cured
cured = max(self.count_neighbors(self.cured.id), 1.0)
infected = max(self.count_neighbors(self.infected.id), 1.0)
prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
if self.prob(prob_cure):
return self.cured

View File

@@ -0,0 +1,134 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
from soil.parameters import *
import logging
from soil.environment import Environment
class DumbViewer(FSM, NetworkAgent):
"""
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
"""
has_been_infected: bool = False
has_tv: bool = False
@default_state
@state
def neutral(self):
if self.has_tv:
if self.prob(self.get("prob_tv_spread")):
return self.infected
if self.has_been_infected:
return self.infected
@state
def infected(self):
for neighbor in self.get_neighbors(state_id=self.neutral.id):
if self.prob(self.get("prob_neighbor_spread")):
neighbor.infect()
def infect(self):
"""
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.has_been_infected = True
class HerdViewer(DumbViewer):
"""
A viewer whose probability of infection depends on the state of its neighbors.
"""
def infect(self):
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
infected = self.count_neighbors(state_id=self.infected.id)
total = self.count_neighbors()
prob_infect = self.get("prob_neighbor_spread") * infected / total
self.debug("prob_infect", prob_infect)
if self.prob(prob_infect):
self.has_been_infected = True
class WiseViewer(HerdViewer):
"""
A viewer that can change its mind.
"""
@state
def cured(self):
prob_cure = self.get("prob_neighbor_cure")
for neighbor in self.get_neighbors(state_id=self.infected.id):
if self.prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self):
self.has_been_cured = True
@state
def infected(self):
if self.has_been_cured:
return self.cured
cured = max(self.count_neighbors(self.cured.id), 1.0)
infected = max(self.count_neighbors(self.infected.id), 1.0)
prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
if self.prob(prob_cure):
return self.cured
class NewsSpread(Environment):
ratio_dumb: probability = 1
ratio_herd: probability = 0
ratio_wise: probability = 0
prob_tv_spread: probability = 0.1
prob_neighbor_spread: probability = 0.1
prob_neighbor_cure: probability = 0.05
def init(self):
self.populate_network([DumbViewer, HerdViewer, WiseViewer],
[self.ratio_dumb, self.ratio_herd, self.ratio_wise])
from itertools import product
from soil import Simulation
# We want to investigate the effect of different agent distributions on the spread of news.
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
# one with a small-world structure and one with a connected structure.
counter = 0
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
for (generator, netparams) in {
"barabasi_albert_graph": {"m": 5},
"erdos_renyi_graph": {"p": 0.1},
}.items():
print(r1, r2, 1-r1-r2, generator)
# Create new simulation
netparams["n"] = 500
Simulation(
name='newspread_sim',
model=NewsSpread,
parameters=dict(
ratio_dumb=r1,
ratio_herd=r2,
ratio_wise=1-r1-r2,
network_generator=generator,
network_params=netparams,
prob_neighbor_spread=0,
),
iterations=5,
max_steps=300,
dump=False,
).run()
counter += 1
# Run all the necessary instances
print(f"A total of {counter} simulations were run.")

View File

@@ -1,7 +1,7 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from soil import Simulation, Environment, agents
from networkx import Graph
import logging
@@ -14,7 +14,7 @@ def mygenerator():
return G
class MyAgent(agents.FSM):
class MyAgent(agents.NetworkAgent, agents.FSM):
times_run = 0
@agents.default_state
@agents.state
@@ -25,26 +25,22 @@ class MyAgent(agents.FSM):
self.info("This runs 2/10 times on average")
class ProgrammaticEnv(Environment):
def init(self):
self.create_network(generator=mygenerator)
assert len(self.G)
self.populate_network(agent_class=MyAgent)
self.add_agent_reporter('times_run')
simulation = Simulation(
name="Programmatic",
model_params={
'topology': {
'params': {
'generator': mygenerator
},
},
'agents': {
'distribution': [{
'agent_class': MyAgent,
'topology': True,
}]
}
},
model=ProgrammaticEnv,
seed='Program',
agent_reporters={'times_run': 'times_run'},
num_trials=1,
iterations=1,
max_time=100,
dry_run=True,
dump=False,
)
if __name__ == "__main__":

View File

@@ -1,26 +0,0 @@
---
name: pubcrawl
num_trials: 3
max_steps: 10
dump: false
network_params:
# Generate 100 empty nodes. They will be assigned a network agent
generator: empty_graph
n: 30
network_agents:
- agent_class: pubcrawl.Patron
description: Extroverted patron
state:
openness: 1.0
weight: 9
- agent_class: pubcrawl.Patron
description: Introverted patron
state:
openness: 0.1
weight: 1
environment_agents:
- agent_class: pubcrawl.Police
environment_class: pubcrawl.CityPubs
environment_params:
altercations: 0
number_of_pubs: 3

View File

@@ -1,6 +1,7 @@
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment
from soil import Environment, Simulation, parameters
from itertools import islice
import networkx as nx
import logging
@@ -8,19 +9,24 @@ class CityPubs(Environment):
"""Environment with Pubs"""
level = logging.INFO
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
super(CityPubs, self).__init__(*args, **kwargs)
pubs = {}
for i in range(number_of_pubs):
number_of_pubs: parameters.Integer = 3
ratio_extroverted: parameters.probability = 0.1
pub_capacity: parameters.Integer = 10
def init(self):
self.pubs = {}
for i in range(self.number_of_pubs):
newpub = {
"name": "The awesome pub #{}".format(i),
"open": True,
"capacity": pub_capacity,
"capacity": self.pub_capacity,
"occupancy": 0,
}
pubs[newpub["name"]] = newpub
self["pubs"] = pubs
self.pubs[newpub["name"]] = newpub
self.add_agent(agent_class=Police)
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
[self.ratio_extroverted, 1-self.ratio_extroverted])
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
def enter(self, pub_id, *nodes):
"""Agents will try to enter. The pub checks if it is possible"""
@@ -146,10 +152,10 @@ class Patron(FSM, NetworkAgent):
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug("Hooray! new friend: {}".format(friend.id))
self.debug("Hooray! new friend: {}".format(friend.unique_id))
befriended = True
else:
self.debug("{} does not want to be friends".format(friend.id))
self.debug("{} does not want to be friends".format(friend.unique_id))
return befriended
@@ -163,13 +169,27 @@ class Police(FSM):
def patrol(self):
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
for drunk in drunksters:
self.info("Kicking out the trash: {}".format(drunk.id))
self.info("Kicking out the trash: {}".format(drunk.unique_id))
drunk.kick_out()
else:
self.info("No trash to take out. Too bad.")
if __name__ == "__main__":
from soil import run_from_config
sim = Simulation(
model=CityPubs,
name="pubcrawl",
iterations=3,
max_steps=10,
dump=False,
parameters=dict(
network_generator=nx.empty_graph,
network_params={"n": 30},
model=CityPubs,
altercations=0,
number_of_pubs=3,
)
)
run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
if __name__ == "__main__":
sim.run(parallel=False)

View File

@@ -1,42 +0,0 @@
---
version: '2'
name: rabbits_basic
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

View File

@@ -1,42 +0,0 @@
---
version: '2'
name: rabbits_improved
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: rabbit_agents.RabbitEnv
model_params:
agents:
topology: true
distribution:
- agent_class: rabbit_agents.Male
weight: 1
- agent_class: rabbit_agents.Female
weight: 1
fixed:
- agent_class: rabbit_agents.RandomAccident
topology: false
hidden: true
state:
group: environment
state:
group: network
mating_prob: 0.1
prob_death: 0.001
topology:
fixed:
directed: true
links: []
nodes:
- id: 1
- id: 0
model_reporters:
num_males: 'num_males'
num_females: 'num_females'
num_rabbits: |
py:lambda env: env.num_males + env.num_females
extra:
visualization_params: {}

View File

@@ -1,23 +1,20 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
from rabbits_basic_sim import RabbitEnv
class RabbitEnv(Environment):
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class RabbitsImprovedEnv(RabbitEnv):
def init(self):
"""Initialize the environment with the new versions of the agents"""
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
class Rabbit(FSM, NetworkAgent):
@@ -150,8 +147,7 @@ class RandomAccident(BaseAgent):
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
with easy("rabbits.yml") as sim:
sim.run()
if __name__ == "__main__":
sim.run()

View File

@@ -1,20 +1,29 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
prob_death = 1e-100
prob_death: params.probability = 1e-100
def init(self):
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
@report
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@report
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@report
@property
def num_females(self):
return self.count_agents(agent_class=Female)
@@ -145,8 +154,8 @@ class RandomAccident(BaseAgent):
self.debug("Rabbits alive: {}".format(rabbits_alive))
if __name__ == "__main__":
from soil import easy
with easy("rabbits.yml") as sim:
sim.run()
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()

View File

@@ -2,7 +2,7 @@
Example of setting a
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, agents
from soil import Simulation, agents, Environment
from soil.time import Delta
@@ -29,14 +29,18 @@ class MyAgent(agents.FSM):
return None, Delta(self.random.expovariate(1 / 16))
class RandomEnv(Environment):
def init(self):
self.add_agent(agent_class=MyAgent)
s = Simulation(
name="Programmatic",
model_params={
'agents': [{'agent_class': MyAgent}],
},
num_trials=1,
model=RandomEnv,
iterations=1,
max_time=100,
dry_run=True,
dump=False,
)

View File

@@ -1,30 +0,0 @@
---
sampler:
method: "SALib.sample.morris.sample"
N: 10
template:
group: simple
num_trials: 1
interval: 1
max_steps: 2
seed: "CompleteSeed!"
dump: false
model_params:
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_class: CounterModel
weight: "{{ x1 }}"
state:
state_id: 0
- agent_class: AggregatedCounter
weight: "{{ 1 - x1 }}"
name: "{{ x3 }}"
skip_test: true
vars:
bounds:
x1: [0, 1]
x2: [1, 2]
fixed:
x3: ["a", "b", "c"]

View File

@@ -1,62 +0,0 @@
name: TerroristNetworkModel_sim
max_steps: 150
num_trials: 1
model_params:
network_params:
generator: random_geometric_graph
radius: 0.2
# generator: geographical_threshold_graph
# theta: 20
n: 100
network_agents:
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.8
state:
id: civilian # Civilians
- agent_class: TerroristNetworkModel.TerroristNetworkModel
weight: 0.1
state:
id: leader # Leaders
- agent_class: TerroristNetworkModel.TrainingAreaModel
weight: 0.05
state:
id: terrorist # Terrorism
- agent_class: TerroristNetworkModel.HavenModel
weight: 0.05
state:
id: civilian # Civilian
# TerroristSpreadModel
information_spread_intensity: 0.7
terrorist_additional_influence: 0.035
max_vulnerability: 0.7
prob_interaction: 0.5
# TrainingAreaModel and HavenModel
training_influence: 0.20
haven_influence: 0.20
# TerroristNetworkModel
vision_range: 0.30
sphere_influence: 2
weight_social_distance: 0.035
weight_link_distance: 0.035
visualization_params:
# Icons downloaded from https://www.iconfinder.com/
shape_property: agent
shapes:
TrainingAreaModel: target
HavenModel: home
TerroristNetworkModel: person
colors:
- attr_id: civilian
color: '#40de40'
- attr_id: terrorist
color: red
- attr_id: leader
color: '#c16a6a'
background_image: 'map_4800x2860.jpg'
background_opacity: '0.9'
background_filter_color: 'blue'
skip_test: true # This simulation takes too long for automated tests.

View File

@@ -1,8 +1,47 @@
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, state, default_state
from soil import Environment
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
from soil import Environment, Simulation
from soil.parameters import *
from soil.utils import int_seed
class TerroristEnvironment(Environment):
n: Integer = 100
radius: Float = 0.2
information_spread_intensity: probability = 0.7
terrorist_additional_influence: probability = 0.03
terrorist_additional_influence: probability = 0.035
max_vulnerability: probability = 0.7
prob_interaction: probability = 0.5
# TrainingAreaModel and HavenModel
training_influence: probability = 0.20
haven_influence: probability = 0.20
# TerroristNetworkModel
vision_range: Float = 0.30
sphere_influence: Integer = 2
weight_social_distance: Float = 0.035
weight_link_distance: Float = 0.035
ratio_civil: probability = 0.8
ratio_leader: probability = 0.1
ratio_training: probability = 0.05
ratio_haven: probability = 0.05
def init(self):
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
self.populate_network([
TerroristNetworkModel.w(state_id='civilian'),
TerroristNetworkModel.w(state_id='leader'),
TrainingAreaModel,
HavenModel
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
def generator(self, *args, **kwargs):
return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
@@ -13,47 +52,35 @@ class TerroristSpreadModel(FSM, Geo):
min_vulnerability (optional else zero)
max_vulnerability
prob_interaction
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
information_spread_intensity = 0.1
terrorist_additional_influence = 0.1
min_vulnerability = 0
max_vulnerability = 1
self.information_spread_intensity = model.environment_params[
"information_spread_intensity"
]
self.terrorist_additional_influence = model.environment_params[
"terrorist_additional_influence"
]
self.prob_interaction = model.environment_params["prob_interaction"]
if self["id"] == self.civilian.id: # Civilian
self.mean_belief = self.random.uniform(0.00, 0.5)
elif self["id"] == self.terrorist.id: # Terrorist
def init(self):
if self.state_id == self.civilian.id: # Civilian
self.mean_belief = self.model.random.uniform(0.00, 0.5)
elif self.state_id == self.terrorist.id: # Terrorist
self.mean_belief = self.random.uniform(0.8, 1.00)
elif self["id"] == self.leader.id: # Leader
elif self.state_id == self.leader.id: # Leader
self.mean_belief = 1.00
else:
raise Exception("Invalid state id: {}".format(self["id"]))
if "min_vulnerability" in model.environment_params:
self.vulnerability = self.random.uniform(
model.environment_params["min_vulnerability"],
model.environment_params["max_vulnerability"],
)
else:
self.vulnerability = self.random.uniform(
0, model.environment_params["max_vulnerability"]
)
self.vulnerability = self.random.uniform(
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
)
@default_state
@state
def civilian(self):
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(
n for n in neighbours if self.random.random() <= self.prob_interaction
n for n in neighbours if self.random.random() <= self.model.prob_interaction
)
influence = sum(self.degree(i) for i in interactions)
mean_belief = sum(
@@ -99,7 +126,7 @@ class TerroristSpreadModel(FSM, Geo):
)
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
@@ -110,12 +137,11 @@ class TerroristSpreadModel(FSM, Geo):
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = agent.node
node = agent.node_id if agent else self.node_id
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_degree"))
@@ -123,10 +149,9 @@ class TerroristSpreadModel(FSM, Geo):
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[node]
return self.model._degree[agent.node_id]
def betweenness(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_betweenness"))
@@ -134,7 +159,7 @@ class TerroristSpreadModel(FSM, Geo):
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[node]
return self.model._betweenness[agent.node_id]
class TrainingAreaModel(FSM, Geo):
@@ -147,13 +172,12 @@ class TrainingAreaModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params["training_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
training_influence = 0.1
min_vulnerability = 0
def init(self):
self.mean_believe = 1
self.vulnerability = 0
@default_state
@state
@@ -177,18 +201,19 @@ class HavenModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params["haven_influence"]
if "min_vulnerability" in model.environment_params:
self.min_vulnerability = model.environment_params["min_vulnerability"]
else:
self.min_vulnerability = 0
self.max_vulnerability = model.environment_params["max_vulnerability"]
min_vulnerability = 0
haven_influence = 0.1
max_vulnerability = 0.5
def init(self):
self.mean_believe = 0
self.vulnerability = 0
def get_occupants(self, **kwargs):
return self.get_neighbors(agent_class=TerroristSpreadModel, **kwargs)
return self.get_neighbors(agent_class=TerroristSpreadModel,
**kwargs)
@default_state
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
@@ -224,13 +249,10 @@ class TerroristNetworkModel(TerroristSpreadModel):
weight_link_distance
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = model.environment_params["vision_range"]
self.sphere_influence = model.environment_params["sphere_influence"]
self.weight_social_distance = model.environment_params["weight_social_distance"]
self.weight_link_distance = model.environment_params["weight_link_distance"]
sphere_influence: float = 1
vision_range: float = 1
weight_social_distance: float = 0.5
weight_link_distance: float = 0.2
@state
def terrorist(self):
@@ -257,26 +279,26 @@ class TerroristNetworkModel(TerroristSpreadModel):
)
)
neighbours = set(
agent.id
agent.unique_id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id)
spatial_proximity = 1 - self.get_distance(agent.id)
social_distance = 1 / self.shortest_path_length(agent.unique_id)
spatial_proximity = 1 - self.get_distance(agent.unique_id)
prob_new_interaction = (
self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent["id"] == agent.civilian.id
agent.state_id == "civilian"
and self.random.random() < prob_new_interaction
):
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x)
dy = abs(source_y - target_y)
@@ -284,6 +306,36 @@ class TerroristNetworkModel(TerroristSpreadModel):
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.G, self.id, target)
return nx.shortest_path_length(self.G, self.unique_id, target)
except nx.NetworkXNoPath:
return float("inf")
sim = Simulation(
model=TerroristEnvironment,
iterations=1,
name="TerroristNetworkModel_sim",
max_steps=150,
seed="default2",
skip_test=False,
dump=False,
)
# TODO: integrate visualization
# visualization_params:
# # Icons downloaded from https://www.iconfinder.com/
# shape_property: agent
# shapes:
# TrainingAreaModel: target
# HavenModel: home
# TerroristNetworkModel: person
# colors:
# - attr_id: civilian
# color: '#40de40'
# - attr_id: terrorist
# color: red
# - attr_id: leader
# color: '#c16a6a'
# background_image: 'map_4800x2860.jpg'
# background_opacity: '0.9'
# background_filter_color: 'blue'


@@ -1,15 +0,0 @@
---
name: torvalds_example
max_steps: 10
interval: 2
model_params:
agent_class: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'

examples/torvalds_sim.py Normal file

@@ -0,0 +1,25 @@
from soil import Environment, Simulation, CounterModel, report
# Get directory path for current file
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
class TorvaldsEnv(Environment):
def init(self):
self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
self.populate_network(CounterModel, skill_level='beginner')
self.agent(node_id="Torvalds").skill_level = 'God'
self.agent(node_id="balkian").skill_level = 'developer'
self.add_agent_reporter("times")
@report
def god_developers(self):
return self.count_agents(skill_level='God')
sim = Simulation(name='torvalds_example',
max_steps=10,
interval=2,
model=TorvaldsEnv)
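A minimal sketch of how this example module might be executed directly (not part of the diff; it assumes `torvalds.edgelist` sits next to the script and that `run()` returns one environment per iteration):

if __name__ == "__main__":
    # Run the simulation defined above and keep the resulting environments in memory
    envs = sim.run()
    for env in envs:
        # god_developers is tagged with @report, so it is also collected every step
        print(env.id, env.god_developers())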

File diff suppressed because one or more lines are too long


@@ -5,6 +5,9 @@ pyyaml>=5.1
pandas>=1
SALib>=1.3
Jinja2
Mesa>=1.1
Mesa>=1.2
pydantic>=1.9
sqlalchemy>=1.4
typing-extensions>=4.4
annotated-types>=0.4
tqdm>=4.64


@@ -1,3 +1,7 @@
[metadata]
long_description = file: README.md
long_description_content_type = text/markdown
[aliases]
test=pytest
[tool:pytest]


@@ -44,13 +44,18 @@ setup(
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
install_requires=install_reqs,
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
python_requires=">=3.8",
entry_points={
'console_scripts':
['soil = soil.__main__:main',


@@ -1 +1 @@
0.30.0rc4
1.0.0rc1


@@ -16,6 +16,7 @@ except NameError:
basestring = str
from pathlib import Path
from .analysis import *
from .agents import *
from . import agents
from .simulation import *
@@ -24,15 +25,15 @@ from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
from .decorators import *
def main(
cfg="simulation.yml",
exporters=None,
parallel=None,
num_processes=1,
output="soil_output",
*,
do_run=False,
debug=False,
pdb=False,
**kwargs,
@@ -68,6 +69,11 @@ def main(
"--dry-run",
"--dry",
action="store_true",
help="Do not run the simulation",
)
parser.add_argument(
"--no-dump",
action="store_true",
help="Do not store the results of the simulation to disk, show in terminal instead.",
)
parser.add_argument(
@@ -82,7 +88,7 @@ def main(
"--graph",
"-g",
action="store_true",
help="Dump each trial's network topology as a GEXF graph. Defaults to false.",
help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
)
parser.add_argument(
"--csv",
@@ -97,12 +103,11 @@ def main(
default=output or "soil_output",
help="folder to write results to. It defaults to the current directory.",
)
if parallel is None:
parser.add_argument(
"--synchronous",
action="store_true",
help="Run trials serially and synchronously instead of in parallel. Defaults to false.",
)
parser.add_argument(
"--num-processes",
default=num_processes,
help="Number of processes to use for parallel execution. Defaults to 1.",
)
parser.add_argument(
"-e",
@@ -111,6 +116,29 @@ def main(
default=[],
help="Export environment and/or simulations using this exporter",
)
parser.add_argument(
"--max_time",
default="-1",
help="Set maximum time for the simulation to run. ",
)
parser.add_argument(
"--max_steps",
default="-1",
help="Set maximum number of steps for the simulation to run.",
)
parser.add_argument(
"--iterations",
default="",
help="Set maximum number of iterations (runs) for the simulation.",
)
parser.add_argument(
"--seed",
default=None,
help="Manually set a seed for the simulation.",
)
parser.add_argument(
"--only-convert",
@@ -132,14 +160,12 @@ def main(
)
args = parser.parse_args()
logger.setLevel(getattr(logging, (args.level or "INFO").upper()))
level = getattr(logging, (args.level or "INFO").upper())
logger.setLevel(level)
if args.version:
return
if parallel is None:
parallel = not args.synchronous
exporters = exporters or [
"default",
]
@@ -167,42 +193,49 @@ def main(
res = []
try:
exp_params = {}
opts = dict(
dry_run=args.dry_run,
dump=not args.no_dump,
debug=debug,
exporters=exporters,
num_processes=args.num_processes,
level=level,
outdir=output,
exporter_params=exp_params,
**kwargs)
if args.seed is not None:
opts["seed"] = args.seed
if args.iterations:
opts["iterations"] =int(args.iterations)
if sim:
logger.info("Loading simulation instance")
sim.dry_run = args.dry_run
sim.exporters = exporters
sim.parallel = parallel
sim.outdir = output
sims = [
sim,
]
for (k, v) in opts.items():
setattr(sim, k, v)
sims = [sim]
else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
assert opts["debug"] == debug
sims = list(
simulation.iter_from_config(
simulation.iter_from_file(
args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs,
**opts,
)
)
for sim in sims:
assert sim.debug == debug
if args.set:
for s in args.set:
k, v = s.split("=", 1)[:2]
v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1]
target = sim
target = sim.parameters
if head:
for part in head[0].split("."):
try:
@@ -217,11 +250,9 @@ def main(
if args.only_convert:
print(sim.to_yaml())
continue
if do_run:
res.append(sim.run())
else:
print("not running")
res.append(sim)
max_time = float(args.max_time) if args.max_time != "-1" else None
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
res.append(sim.run(max_time=max_time, max_steps=max_steps))
except Exception as ex:
if args.pdb:
@@ -242,7 +273,7 @@ def main(
@contextmanager
def easy(cfg, pdb=False, debug=False, **kwargs):
try:
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
return main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
except Exception as e:
if os.environ.get("SOIL_POSTMORTEM"):
from .debugging import post_mortem
@@ -253,4 +284,4 @@ def easy(cfg, pdb=False, debug=False, **kwargs):
if __name__ == "__main__":
main(do_run=True)
main()


@@ -2,8 +2,8 @@ from . import main as init_main
def main():
init_main(do_run=True)
init_main()
if __name__ == "__main__":
init_main(do_run=True)
init_main()


@@ -1,6 +1,12 @@
from . import NetworkAgent
from . import BaseAgent, NetworkAgent
class Ticker(BaseAgent):
times = 0
def step(self):
self.times += 1
class CounterModel(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors


@@ -6,9 +6,9 @@ from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, agent=None, center=False, **kwargs):
def geo_search(self, radius, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = agent.node
node = self.node_id
G = self.subgraph(**kwargs)


@@ -11,13 +11,15 @@ import inspect
import types
import textwrap
import networkx as nx
import warnings
import sys
from typing import Any
from mesa import Agent as MesaAgent
from mesa import Agent as MesaAgent, Model
from typing import Dict, List
from .. import serialization, utils, time, config
from .. import serialization, network, utils, time, config
IGNORED_FIELDS = ("model", "logger")
@@ -90,7 +92,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
"""
def __init__(self, unique_id, model, name=None, interval=None, **kwargs):
def __init__(self, unique_id, model, name=None, init=True, interval=None, **kwargs):
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
@@ -116,6 +118,11 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
for (k, v) in kwargs.items():
setattr(self, k, v)
if init:
self.init()
def init(self):
pass
def __hash__(self):
return hash(self.unique_id)
@@ -123,9 +130,16 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
def prob(self, probability):
return prob(probability, self.model.random)
@classmethod
def w(cls, **kwargs):
return custom(cls, **kwargs)
# TODO: refactor to clean up mesa compatibility
@property
def id(self):
msg = "This attribute is deprecated. Use `unique_id` instead"
warnings.warn(msg, DeprecationWarning)
print(msg, file=sys.stderr)
return self.unique_id
@classmethod
@@ -175,7 +189,11 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
return it
def get(self, key, default=None):
return self[key] if key in self else default
if key in self:
return self[key]
elif key in self.model:
return self.model[key]
return default
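With this change, an agent's `get` falls back to attributes stored on the model when the key is missing on the agent itself. A hedged illustration (the attribute name is made up):

from soil import Environment
from soil.agents import BaseAgent

env = Environment(mood="neutral")       # arbitrary value stored on the model
a = env.add_agent(BaseAgent)
print(a.get("mood", "unknown"))         # expected: "neutral", taken from the model
a.mood = "happy"
print(a.get("mood", "unknown"))         # expected: "happy", the agent value wins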
@property
def now(self):
@@ -185,8 +203,10 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
# No environment
return None
def die(self):
self.info(f"agent dying")
def die(self, msg=None):
if msg:
self.info("Agent dying:", msg)
self.debug(f"agent dying")
self.alive = False
try:
self.model.schedule.remove(self)
@@ -195,15 +215,16 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
return time.NEVER
def step(self):
raise NotImplementedError("Agent must implement step method")
def _check_alive(self):
if not self.alive:
raise time.DeadAgent(self.unique_id)
super().step()
return time.Delta(self.interval)
def log(self, message, *args, level=logging.INFO, **kwargs):
def log(self, *message, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = message + " ".join(str(i) for i in args)
message = " ".join(str(i) for i in message)
message = "[@{:>4}]\t{:>10}: {}".format(self.now, repr(self), message)
for k, v in kwargs:
message += " {k}={v} ".format(k, v)
@@ -376,7 +397,7 @@ class AgentView(Mapping, Set):
def filter_agents(
agents,
agents: dict,
*id_args,
unique_id=None,
state_id=None,
@@ -621,12 +642,16 @@ def _from_distro(
from .network_agents import *
from .fsm import *
from .evented import *
from typing import Optional
class Agent(NetworkAgent, FSM, EventedAgent):
"""Default agent class, has both network and event capabilities"""
from ..environment import NetworkEnvironment
from .BassModel import *
from .IndependentCascadeModel import *
from .SISaModel import *
@@ -640,3 +665,8 @@ except ImportError:
import sys
print("Could not load the Geo Agent, scipy is not installed", file=sys.stderr)
def custom(cls, **kwargs):
"""Create a new class from a template class and keyword arguments"""
return type(cls.__name__, (cls,), kwargs)
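`custom` (and the `w` classmethod added to `BaseAgent`) simply creates a subclass whose class attributes are overridden by the keyword arguments. A hedged sketch using the `Ticker` agent defined in this changeset, assuming it is exported at the package level:

from soil.agents import Ticker, custom

# A Ticker whose counter starts at 100 instead of the default 0
HighTicker = custom(Ticker, times=100)
# Equivalent shorthand through the new classmethod
HighTicker2 = Ticker.w(times=100)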


@@ -1,10 +1,11 @@
from . import MetaAgent, BaseAgent
from ..time import Delta
from functools import partial, wraps
import inspect
def state(name=None):
def state(name=None, default=False):
def decorator(func, name=None):
"""
A state function should return either a state id, or a tuple (state_id, when)
@@ -39,7 +40,7 @@ def state(name=None):
self._last_except = None
func.id = name or func.__name__
func.is_default = False
func.is_default = default
return func
if callable(name):
@@ -85,8 +86,8 @@ class MetaFSM(MetaAgent):
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, **kwargs):
super(FSM, self).__init__(**kwargs)
def __init__(self, init=True, **kwargs):
super().__init__(**kwargs, init=False)
if not hasattr(self, "state_id"):
if not self._default_state:
raise ValueError(
@@ -95,12 +96,19 @@ class FSM(BaseAgent, metaclass=MetaFSM):
self.state_id = self._default_state.id
self._coroutine = None
self.default_interval = Delta(self.model.interval)
self._set_state(self.state_id)
if init:
self.init()
@classmethod
def states(cls):
return list(cls._states.keys())
def step(self):
self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
default_interval = super().step()
self._check_alive()
next_state = self._states[self.state_id](self)
when = None
@@ -120,7 +128,7 @@ class FSM(BaseAgent, metaclass=MetaFSM):
if next_state is not None:
self._set_state(next_state)
return when or default_interval
return when or self.default_interval
def _set_state(self, state, when=None):
if hasattr(state, "id"):
@@ -132,8 +140,8 @@ class FSM(BaseAgent, metaclass=MetaFSM):
self.model.schedule.add(self, when=when)
return state
def die(self):
return self.dead, super().die()
def die(self, *args, **kwargs):
return self.dead, super().die(*args, **kwargs)
@state
def dead(self):


@@ -2,23 +2,37 @@ from . import BaseAgent
class NetworkAgent(BaseAgent):
def __init__(self, *args, topology, node_id, **kwargs):
super().__init__(*args, **kwargs)
def __init__(self, *args, topology=None, init=True, node_id=None, **kwargs):
super().__init__(*args, init=False, **kwargs)
assert topology is not None
assert node_id is not None
self.G = topology
self.G = topology or self.model.G
assert self.G
if node_id is None:
nodes = self.random.choices(list(self.G.nodes), k=len(self.G))
for n_id in nodes:
if "agent" not in self.G.nodes[n_id] or self.G.nodes[n_id]["agent"] is None:
node_id = n_id
break
else:
node_id = len(self.G)
self.info(f"All nodes ({len(self.G)}) have an agent assigned, adding a new node to the graph for agent {self.unique_id}")
self.G.add_node(node_id)
assert node_id is not None
self.G.nodes[node_id]["agent"] = self
self.node_id = node_id
if init:
self.init()
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
if init:
self.init()
def iter_neighbors(self, **kwargs):
return self.iter_agents(limit_neighbors=True, **kwargs)
def get_neighbors(self, **kwargs):
return list(self.iter_neighbors())
return list(self.iter_neighbors(**kwargs))
@property
def node(self):
@@ -26,20 +40,18 @@ class NetworkAgent(BaseAgent):
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None
if isinstance(unique_id, list):
unique_ids = set(unique_id)
elif unique_id is not None:
unique_ids = set(
[
unique_id,
]
)
if unique_ids is not None:
try:
unique_ids = set(unique_id)
except TypeError:
unique_ids = set([unique_id])
if limit_neighbors:
neighbor_ids = set()
for node_id in self.G.neighbors(self.node_id):
if self.G.nodes[node_id].get("agent") is not None:
neighbor_ids.add(node_id)
agent = self.G.nodes[node_id].get("agent")
if agent is not None:
neighbor_ids.add(agent.unique_id)
if unique_ids:
unique_ids = unique_ids & neighbor_ids
else:

soil/analysis.py Normal file

@@ -0,0 +1,49 @@
import os
import sqlalchemy
import pandas as pd
from collections import namedtuple
def plot(env, agent_df=None, model_df=None, steps=False, ignore=["agent_count", ]):
"""Plot the model dataframe and agent dataframe together."""
if agent_df is None:
agent_df = env.agent_df()
if model_df is None:
model_df = env.model_df()
ignore = list(ignore)
if not steps:
ignore.append("step")
else:
ignore.append("time")
ax = model_df.drop(ignore, axis='columns').plot();
if not agent_df.empty:
agent_df.unstack().apply(lambda x: x.value_counts(),
axis=1).fillna(0).plot(ax=ax, secondary_y=True);
Results = namedtuple("Results", ["config", "parameters", "env", "agents"])
#TODO implement reading from CSV and SQLITE
def read_sql(fpath=None, name=None, include_agents=False):
if not (fpath is None) ^ (name is None):
raise ValueError("Specify either a path or a simulation name")
if name:
fpath = os.path.join("soil_output", name, f"{name}.sqlite")
fpath = os.path.abspath(fpath)
# TODO: improve url parsing. This is a hacky way to check we weren't given a URL
if "://" not in fpath:
fpath = f"sqlite:///{fpath}"
engine = sqlalchemy.create_engine(fpath)
with engine.connect() as conn:
env = pd.read_sql_table("env", con=conn,
index_col="step").reset_index().set_index([
"simulation_id", "params_id",
"iteration_id", "step"
])
agents = pd.read_sql_table("agents", con=conn, index_col=["simulation_id", "params_id", "iteration_id", "step", "agent_id"])
config = pd.read_sql_table("configuration", con=conn, index_col="simulation_id")
parameters = pd.read_sql_table("parameters", con=conn, index_col=["iteration_id", "params_id", "simulation_id"])
try:
parameters = parameters.pivot(columns="key", values="value")
except Exception as e:
print(f"warning: coult not pivot parameters: {e}")
return Results(config, parameters, env, agents)
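A hedged sketch of loading previously dumped results with the new helper (the simulation name is illustrative):

from soil.analysis import read_sql

# Reads soil_output/torvalds_example/torvalds_example.sqlite by default
res = read_sql(name="torvalds_example")
print(res.parameters)     # one row per (simulation_id, params_id, iteration_id)
print(res.env.head())     # model-level variables, indexed by step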


@@ -1,267 +1,2 @@
from __future__ import annotations
from enum import Enum
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml
import os
import sys
from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
from . import environment, utils
import networkx as nx
# Could use TypeAlias in python >= 3.10
nodeId = int
class Node(BaseModel):
id: nodeId
state: Optional[Dict[str, Any]] = {}
class Edge(BaseModel):
source: nodeId
target: nodeId
value: Optional[float] = 1
class Topology(BaseModel):
nodes: List[Node]
directed: bool
links: List[Edge]
class NetConfig(BaseModel):
params: Optional[Dict[str, Any]]
fixed: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
class Config:
arbitrary_types_allowed = True
@staticmethod
def default():
return NetConfig(topology=None, params=None)
@root_validator
def validate_all(cls, values):
if "params" not in values and "topology" not in values:
raise ValueError(
"You must specify either a topology or the parameters to generate a graph"
)
return values
class EnvConfig(BaseModel):
@staticmethod
def default():
return EnvConfig()
class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None
unique_id: Optional[int] = None
topology: Optional[bool] = False
node_id: Optional[Union[int, str]] = None
state: Optional[Dict[str, Any]] = {}
class FixedAgentConfig(SingleAgentConfig):
n: Optional[int] = 1
hidden: Optional[bool] = False # Do not count this agent towards total agent count
@root_validator
def validate_all(cls, values):
if values.get("unique_id", None) is not None and values.get("n", 1) > 1:
raise ValueError(
f"An unique_id can only be provided when there is only one agent ({values.get('n')} given)"
)
return values
class OverrideAgentConfig(FixedAgentConfig):
filter: Optional[Dict[str, Any]] = None
class Strategy(Enum):
topology = "topology"
total = "total"
class AgentDistro(SingleAgentConfig):
weight: Optional[float] = 1
strategy: Strategy = Strategy.topology
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
@root_validator
def validate_all(cls, values):
if "distribution" in values and (
"n" not in values and "topology" not in values
):
raise ValueError(
"You need to provide the number of agents or a topology to extract the value from."
)
return values
class Config(BaseModel, extra=Extra.allow):
version: Optional[str] = "1"
name: str = "Unnamed Simulation"
description: Optional[str] = None
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
max_steps: int = -1
num_processes: int = 1
interval: float = 1
seed: str = ""
dry_run: bool = False
skip_test: bool = False
model_class: Union[Type, str] = environment.Environment
model_params: Optional[Dict[str, Any]] = {}
visualization_params: Optional[Dict[str, Any]] = {}
@classmethod
def from_raw(cls, cfg):
if isinstance(cfg, Config):
return cfg
if cfg.get("version", "1") == "1" and any(
k in cfg for k in ["agents", "agent_class", "topology", "environment_class"]
):
return convert_old(cfg)
return Config(**cfg)
def convert_old(old, strict=True):
"""
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
"""
utils.logger.warning(
"The old configuration format is deprecated. The converted file MAY NOT yield the right results"
)
new = old.copy()
network = {}
if "topology" in old:
del new["topology"]
network["topology"] = old["topology"]
if "network_params" in old and old["network_params"]:
del new["network_params"]
for (k, v) in old["network_params"].items():
if k == "path":
network["path"] = v
else:
network.setdefault("params", {})[k] = v
topology = None
if network:
topology = network
agents = {"fixed": [], "distribution": []}
def updated_agent(agent):
"""Convert an agent definition"""
newagent = dict(agent)
return newagent
by_weight = []
fixed = []
override = []
if "environment_agents" in new:
for agent in new["environment_agents"]:
agent.setdefault("state", {})["group"] = "environment"
if "agent_id" in agent:
agent["state"]["name"] = agent["agent_id"]
del agent["agent_id"]
agent["hidden"] = True
agent["topology"] = False
fixed.append(updated_agent(agent))
del new["environment_agents"]
if "agent_class" in old:
del new["agent_class"]
agents["agent_class"] = old["agent_class"]
if "default_state" in old:
del new["default_state"]
agents["state"] = old["default_state"]
if "network_agents" in old:
agents["topology"] = True
agents.setdefault("state", {})["group"] = "network"
for agent in new["network_agents"]:
agent = updated_agent(agent)
if "agent_id" in agent:
agent["state"]["name"] = agent["agent_id"]
del agent["agent_id"]
fixed.append(agent)
else:
by_weight.append(agent)
del new["network_agents"]
if "agent_class" in old and (not fixed and not by_weight):
agents["topology"] = True
by_weight = [{"agent_class": old["agent_class"], "weight": 1}]
# TODO: translate states properly
if "states" in old:
del new["states"]
states = old["states"]
if isinstance(states, dict):
states = states.items()
else:
states = enumerate(states)
for (k, v) in states:
override.append({"filter": {"node_id": k}, "state": v})
agents["override"] = override
agents["fixed"] = fixed
agents["distribution"] = by_weight
model_params = {}
if "environment_params" in new:
del new["environment_params"]
model_params = dict(old["environment_params"])
if "environment_class" in old:
del new["environment_class"]
new["model_class"] = old["environment_class"]
if "dump" in old:
del new["dump"]
new["dry_run"] = not old["dump"]
model_params["topology"] = topology
model_params["agents"] = agents
return Config(version="2", model_params=model_params, **new)
def load_config(cfg):
return cfg


@@ -8,8 +8,10 @@ class SoilCollector(MDC):
tables = tables or {}
if 'agent_count' not in model_reporters:
model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
if 'state_id' not in agent_reporters:
agent_reporters['agent_id'] = lambda agent: agent.get('state_id', None)
if 'time' not in model_reporters:
model_reporters['time'] = lambda m: m.now
# if 'state_id' not in agent_reporters:
# agent_reporters['state_id'] = lambda agent: getattr(agent, 'state_id', None)
super().__init__(model_reporters=model_reporters,
agent_reporters=agent_reporters,


@@ -8,6 +8,7 @@ from textwrap import indent
from functools import wraps
from .agents import FSM, MetaFSM
from mesa import Model, Agent
def wrapcmd(func):
@@ -15,14 +16,22 @@ def wrapcmd(func):
def wrapper(self, arg: str, temporary=False):
sys.settrace(self.trace_dispatch)
lastself = self
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
known["agent"] = known.get("self", None)
known["model"] = known.get("self", {}).get("model")
known["attrs"] = arg.strip().split()
exec(func.__code__, known, known)
this = known.get("self", None)
if isinstance(this, Model):
known["model"] = this
elif isinstance(this, Agent):
known["agent"] = this
known["model"] = this.model
known["self"] = lastself
return exec(func.__code__, known, known)
return wrapper
@@ -57,6 +66,7 @@ class Debug(pdb.Pdb):
do_sl = do_soil_list
def do_continue_state(self, arg):
"""Continue until next time this state is reached"""
self.do_break_state(arg, temporary=True)
return self.do_continue("")
@@ -80,6 +90,49 @@ class Debug(pdb.Pdb):
do_aa = do_soil_agent
def do_break_step(self, arg: str):
"""
Break before the next step.
"""
try:
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
func = getattr(known["model"], "step")
except AttributeError as ex:
self.error(f"The model does not have a step function: {ex}")
return
if hasattr(func, "__func__"):
func = func.__func__
code = func.__code__
# use co_name to identify the bkpt (function names
# could be aliased, but co_name is invariant)
funcname = code.co_name
lineno = code.co_firstlineno
filename = code.co_filename
# Check for reasonable breakpoint
line = self.checkline(filename, lineno)
if not line:
raise ValueError("no line found")
# now set the break point
existing = self.get_breaks(filename, line)
if existing:
self.message("Breakpoint already exists at %s:%d" % (filename, line))
return
cond = f"self.schedule.steps > {model.schedule.steps}"
err = self.set_break(filename, line, True, cond, funcname)
if err:
self.error(err)
else:
bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
return self.do_continue("")
do_bstep = do_break_step
def do_break_state(self, arg: str, instances=None, temporary=False):
"""
Break before a specified state is stepped into.

soil/decorators.py Normal file

@@ -0,0 +1,6 @@
def report(f: property):
if isinstance(f, property):
setattr(f.fget, "add_to_report", True)
else:
setattr(f, "add_to_report", True)
return f
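The decorator only tags a function (or a property's getter) with `add_to_report`; environments pick those up as model reporters when they are created. A hedged sketch (the metric is made up):

from soil import Environment, report

class HappyEnv(Environment):
    @report
    def happy_agents(self):
        # collected automatically every step as a model-level reporter
        return self.count_agents(happy=True)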


@@ -6,20 +6,21 @@ import math
import logging
import inspect
from typing import Any, Dict, Optional, Union, List
from typing import Any, Callable, Dict, Optional, Union, List, Type
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
from networkx.readwrite import json_graph
import networkx as nx
from mesa import Model
from mesa import Model, Agent
from . import agents as agentmod, config, datacollection, serialization, utils, time, network, events
from . import agents as agentmod, datacollection, serialization, utils, time, network, events
# TODO: maybe add metaclass to read attributes of a model
class BaseEnvironment(Model):
"""
The environment is key in a simulation. It controls how agents interact,
@@ -33,109 +34,111 @@ class BaseEnvironment(Model):
:meth:`soil.environment.Environment.get` method.
"""
def __init__(
self,
id="unnamed_env",
seed="default",
schedule_class=time.TimedActivation,
dir_path=None,
interval=1,
agent_class=None,
agents: List[tuple[type, Dict[str, Any]]] = {},
collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
**env_params,
):
super().__init__(seed=seed)
self.current_id = -1
self.id = id
collector_class = datacollection.SoilCollector
def __new__(cls,
*args: Any,
seed="default",
dir_path=None,
collector_class: type = None,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
**kwargs: Any) -> Any:
"""Create a new model with a default seed value"""
self = super().__new__(cls, *args, seed=seed, **kwargs)
self.dir_path = dir_path or os.getcwd()
if schedule_class is None:
schedule_class = time.TimedActivation
else:
schedule_class = serialization.deserialize(schedule_class)
self.schedule = schedule_class(self)
self.agent_class = agent_class or agentmod.BaseAgent
self.interval = interval
self.init_agents(agents)
self.logger = utils.logger.getChild(self.id)
collector_class = collector_class or cls.collector_class
collector_class = serialization.deserialize(collector_class)
self.datacollector = collector_class(
model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
)
for k in dir(cls):
v = getattr(cls, k)
if isinstance(v, property):
v = v.fget
if getattr(v, "add_to_report", False):
self.add_model_reporter(k, v)
return self
def __init__(
self,
*,
id="unnamed_env",
seed="default",
dir_path=None,
schedule_class=time.TimedActivation,
interval=1,
logger = None,
agents: Optional[Dict] = None,
collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
init: bool = True,
**env_params,
):
super().__init__()
self.current_id = -1
self.id = id
if logger:
self.logger = logger
else:
self.logger = utils.logger.getChild(self.id)
if schedule_class is None:
schedule_class = time.TimedActivation
else:
schedule_class = serialization.deserialize(schedule_class)
self.interval = interval
self.schedule = schedule_class(self)
for (k, v) in env_params.items():
self[k] = v
def _agent_from_dict(self, agent):
"""
Translate an agent dictionary into an agent
"""
agent = dict(**agent)
cls = agent.pop("agent_class", None) or self.agent_class
unique_id = agent.pop("unique_id", None)
if unique_id is None:
unique_id = self.next_id()
if agents:
self.add_agents(**agents)
if init:
self.init()
self.datacollector.collect(self)
return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)
def init_agents(self, agents: Union[config.AgentConfig, List[Dict[str, Any]]] = {}):
"""
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
dictionaries that each describes an agent.
If given a list of dictionaries, an agent will be created for each dictionary. The agent
class can be specified through the `agent_class` key. The rest of the items will be used
as parameters to the agent.
"""
if not agents:
return
lst = agents
override = []
if not isinstance(lst, list):
if not isinstance(agents, config.AgentConfig):
lst = config.AgentConfig(**agents)
if lst.override:
override = lst.override
lst = self._agent_dict_from_config(lst)
# TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
# because it needs attribute such as unique_id, which are only present after init
new_agents = [self._agent_from_dict(agent) for agent in lst]
for a in new_agents:
self.schedule.add(a)
for rule in override:
for agent in agentmod.filter_agents(self.schedule._agents, **rule.filter):
for attr, value in rule.state.items():
setattr(agent, attr, value)
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg, random=self.random)
def init(self):
pass
@property
def agents(self):
return agentmod.AgentView(self.schedule._agents)
def find_one(self, *args, **kwargs):
def agent(self, *args, **kwargs):
return agentmod.AgentView(self.schedule._agents).one(*args, **kwargs)
def count_agents(self, *args, **kwargs):
return sum(1 for i in self.agents(*args, **kwargs))
def agent_df(self, steps=False):
df = self.datacollector.get_agent_vars_dataframe()
if steps:
df.index.rename(["step", "agent_id"], inplace=True)
return df
model_df = self.datacollector.get_model_vars_dataframe()
df.index = df.index.set_levels(model_df.time, level=0).rename(["time", "agent_id"])
return df
def model_df(self, steps=False):
df = self.datacollector.get_model_vars_dataframe()
if steps:
return df
df.index.rename("step", inplace=True)
return df.reset_index().set_index("time")
@property
def now(self):
@@ -144,17 +147,34 @@ class BaseEnvironment(Model):
raise Exception(
"The environment has not been scheduled, so it has no sense of time"
)
def init_agents(self):
pass
def add_agent(self, unique_id=None, **kwargs):
def add_agent(self, agent_class, unique_id=None, **agent):
if unique_id is None:
unique_id = self.next_id()
kwargs["unique_id"] = unique_id
a = self._agent_from_dict(kwargs)
agent["unique_id"] = unique_id
agent = dict(**agent)
unique_id = agent.pop("unique_id", None)
if unique_id is None:
unique_id = self.next_id()
a = serialization.deserialize(agent_class)(unique_id=unique_id, model=self, **agent)
self.schedule.add(a)
return a
def add_agents(self, agent_classes: List[type], k, weights: Optional[List[float]] = None, **kwargs):
if isinstance(agent_classes, type):
agent_classes = [agent_classes]
if weights is None:
weights = [1] * len(agent_classes)
for cls in self.random.choices(agent_classes, weights=weights, k=k):
self.add_agent(agent_class=cls, **kwargs)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
@@ -172,12 +192,37 @@ class BaseEnvironment(Model):
Advance one step in the simulation, and update the data collection and scheduler appropriately
"""
super().step()
# self.logger.info(
# "--- Step: {:^5} - Time: {now:^5} ---", steps=self.schedule.steps, now=self.now
# )
self.schedule.step()
self.datacollector.collect(self)
if self.logger.isEnabledFor(logging.DEBUG):
msg = "Model data:\n"
max_width = max(len(k) for k in self.datacollector.model_vars.keys())
for (k, v) in self.datacollector.model_vars.items():
msg += f"\t{k:<{max_width}}: {v[-1]:>6}\n"
self.logger.debug(f"--- Steps: {self.schedule.steps:^5} - Time: {self.now:^5} --- " + msg)
def add_model_reporter(self, name, func=None):
if not func:
func = lambda env: getattr(env, name)
self.datacollector._new_model_reporter(name, func)
def add_agent_reporter(self, name, agent_type=None):
if agent_type:
reporter = lambda a: getattr(a, name) if isinstance(a, agent_type) else None
else:
reporter = lambda a: getattr(a, name, None)
self.datacollector._new_agent_reporter(name, reporter)
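A hedged sketch of using these helpers from an environment's `init` (the generator and reporter names are illustrative):

from soil import Environment, CounterModel

class ReportingEnv(Environment):
    def init(self):
        self.create_network(generator="complete_graph", n=5)
        self.populate_network(CounterModel)
        # model-level reporter computed from the environment every step
        self.add_model_reporter("num_agents", lambda env: env.count_agents())
        # agent-level reporter for the "times" attribute of CounterModel agents
        self.add_agent_reporter("times", agent_type=CounterModel)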
@classmethod
def run(cls, *,
iterations=1,
num_processes=1, **kwargs):
from .simulation import Simulation
return Simulation(name=cls.__name__,
model=cls, iterations=iterations,
num_processes=num_processes, **kwargs).run()
def __getitem__(self, key):
try:
return getattr(self, key)
@@ -214,67 +259,72 @@ class NetworkEnvironment(BaseEnvironment):
and methods to associate agents to nodes and vice versa.
"""
def __init__(
self, *args, topology: Union[config.NetConfig, nx.Graph] = None, **kwargs
):
agents = kwargs.pop("agents", None)
super().__init__(*args, agents=None, **kwargs)
def __init__(self,
*args,
topology: Optional[Union[nx.Graph, str]] = None,
agent_class: Optional[Type[agentmod.Agent]] = None,
network_generator: Optional[Callable] = None,
network_params: Optional[Dict] = {},
init=True,
**kwargs):
self.topology = topology
self.network_generator = network_generator
self.network_params = network_params
if topology or network_params or network_generator:
self.create_network(topology, generator=network_generator, **network_params)
else:
self.G = nx.Graph()
super().__init__(*args, **kwargs, init=False)
if topology is None:
topology = nx.Graph()
elif not isinstance(topology, nx.Graph):
topology = network.from_config(topology, dir_path=self.dir_path)
self.agent_class = agent_class
if agent_class:
self.agent_class = serialization.deserialize(agent_class)
if self.agent_class:
self.populate_network(self.agent_class)
self._check_agent_nodes()
if init:
self.init()
self.datacollector.collect(self)
def add_agent(self, agent_class, *args, node_id=None, topology=None, **kwargs):
if node_id is None and topology is None:
return super().add_agent(agent_class, *args, **kwargs)
try:
a = super().add_agent(agent_class, *args, node_id=node_id, **kwargs)
except TypeError:
self.logger.warning(f"Agent constructor for {agent_class} does not have a node_id attribute. Might be a bug.")
a = super().add_agent(agent_class, *args, **kwargs)
self.G.nodes[node_id]["agent"] = a
return a
def add_agents(self, *args, k=None, **kwargs):
if not k and not self.G:
raise ValueError("Cannot add agents to an empty network")
super().add_agents(*args, k=k or len(self.G), **kwargs)
def create_network(self, topology=None, generator=None, path=None, **network_params):
if topology is not None:
topology = network.from_topology(topology, dir_path=self.dir_path)
elif path is not None:
topology = network.from_topology(path, dir_path=self.dir_path)
elif generator is not None:
topology = network.from_params(generator=generator, dir_path=self.dir_path, **network_params)
else:
raise ValueError("topology must be a networkx.Graph or a string, or network_generator must be provided")
self.G = topology
self.init_agents(agents)
def init_agents(self, *args, **kwargs):
"""Initialize the agents from a"""
super().init_agents(*args, **kwargs)
for agent in self.schedule._agents.values():
self._init_node(agent)
def _init_node(self, agent):
"""
Make sure the node for a given agent has the proper attributes.
"""
if hasattr(agent, "node_id"):
self.G.nodes[agent.node_id]["agent"] = agent
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg, topology=self.G, random=self.random)
def _agent_from_dict(self, agent, unique_id=None):
agent = dict(agent)
if not agent.get("topology", False):
return super()._agent_from_dict(agent)
if unique_id is None:
unique_id = self.next_id()
node_id = agent.get("node_id", None)
if node_id is None:
node_id = network.find_unassigned(self.G, random=self.random)
self.G.nodes[node_id]["agent"] = None
agent["node_id"] = node_id
agent["unique_id"] = unique_id
agent["topology"] = self.G
node_attrs = self.G.nodes[node_id]
node_attrs.pop('agent', None)
node_attrs.update(agent)
agent = node_attrs
a = super()._agent_from_dict(agent)
self._init_node(a)
return a
@property
def network_agents(self):
for a in self.schedule._agents.values():
if isinstance(a, agentmod.NetworkAgent):
yield a
"""Return agents still alive and assigned to a node in the network."""
for (id, data) in self.G.nodes(data=True):
if "agent" in data:
agent = data["agent"]
if getattr(agent, "alive", True):
yield agent
def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
if unique_id is None:
@@ -290,7 +340,6 @@ class NetworkEnvironment(BaseEnvironment):
self.G.add_node(node_id)
assert "agent" not in self.G.nodes[node_id]
self.G.nodes[node_id]["agent"] = None # Reserve
a = self.add_agent(
unique_id=unique_id,
@@ -302,24 +351,56 @@ class NetworkEnvironment(BaseEnvironment):
a["visible"] = True
return a
def add_agent(self, *args, **kwargs):
a = super().add_agent(*args, **kwargs)
if hasattr(a, "node_id"):
assert self.G.nodes[a.node_id]["agent"] == a
return a
def _check_agent_nodes(self):
"""
Detect nodes that have agents assigned to them.
"""
for (id, data) in self.G.nodes(data=True):
if "agent_id" in data:
agent = self.agents(data["agent_id"])
self.G.nodes[id]["agent"] = agent
assert not getattr(agent, "node_id", None) or agent.node_id == id
agent.node_id = id
for agent in self.agents():
if hasattr(agent, "node_id"):
node_id = agent["node_id"]
if node_id not in self.G.nodes:
raise ValueError(f"Agent {agent} is assigned to node {agent.node_id} which is not in the network")
node = self.G.nodes[node_id]
if node.get("agent") is not None and node["agent"] != agent:
raise ValueError(f"Node {node_id} already has a different agent assigned to it")
self.G.nodes[node_id]["agent"] = agent
def add_agents(self, agent_classes: List[type], k=None, weights: Optional[List[float]] = None, **kwargs):
if k is None:
k = len(self.G)
if not k:
raise ValueError("Cannot add agents to an empty network")
super().add_agents(agent_classes, k=k, weights=weights, **kwargs)
def agent_for_node_id(self, node_id):
return self.G.nodes[node_id].get("agent")
def populate_network(self, agent_class, weights=None, **agent_params):
if not hasattr(agent_class, "len"):
def populate_network(self, agent_class: List[Model], weights: List[float] = None, **agent_params):
if isinstance(agent_class, type):
agent_class = [agent_class]
weights = None
for (node_id, node) in self.G.nodes(data=True):
else:
agent_class = list(agent_class)
if not weights:
weights = [1] * len(agent_class)
assert len(self.G)
classes = self.random.choices(agent_class, weights, k=len(self.G))
toadd = []
for (cls, (node_id, node)) in zip(classes, self.G.nodes(data=True)):
if "agent" in node:
continue
a_class = self.random.choices(agent_class, weights)[0]
self.add_agent(node_id=node_id, topology=self.G, agent_class=a_class, **agent_params)
node["agent"] = None # Reserve
toadd.append(dict(node_id=node_id, topology=self.G, agent_class=cls, **agent_params))
for d in toadd:
a = self.add_agent(**d)
self.G.nodes[d["node_id"]]["agent"] = a
assert all("agent" in node for (_, node) in self.G.nodes(data=True))
assert len(list(self.network_agents))
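With the reworked signature, `populate_network` accepts a single class or a list of classes with optional weights, reserving each node before instantiating its agent. A hedged sketch (the 70/30 split and the extra class attribute are made up):

from soil import Environment, CounterModel

class MixedEnv(Environment):
    def init(self):
        self.create_network(generator="barabasi_albert_graph", n=50, m=2)
        # Roughly 70% plain counters and 30% of a trivial subclass
        self.populate_network([CounterModel, CounterModel.w(flavour="fast")],
                              weights=[0.7, 0.3])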
class EventedEnvironment(BaseEnvironment):
@@ -327,7 +408,7 @@ class EventedEnvironment(BaseEnvironment):
for agent in self.agents(**kwargs):
if agent == sender:
continue
self.logger.info(f"Telling {repr(agent)}: {msg} ttl={ttl}")
self.logger.debug(f"Telling {repr(agent)}: {msg} ttl={ttl}")
try:
inbox = agent._inbox
except AttributeError:


@@ -8,9 +8,10 @@ from textwrap import dedent, indent
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
from .serialization import deserialize
from .serialization import deserialize, serialize
from .utils import try_backup, open_or_reuse, logger, timer
@@ -38,7 +39,7 @@ class DryRunner(BytesIO):
except UnicodeDecodeError:
pass
logger.info(
"**Not** written to {} (dry run mode):\n\n{}\n\n".format(
"**Not** written to {} (no_dump mode):\n\n{}\n\n".format(
self.__fname, content
)
)
@@ -51,12 +52,12 @@ class Exporter:
if you don't plan to implement all the methods.
"""
def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
def __init__(self, simulation, outdir=None, dump=True, copy_to=None):
self.simulation = simulation
outdir = outdir or os.path.join(os.getcwd(), "soil_output")
self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
self.dry_run = dry_run
if copy_to is None and dry_run:
self.dump = dump
if copy_to is None and not dump:
copy_to = sys.stdout
self.copy_to = copy_to
@@ -68,16 +69,16 @@ class Exporter:
"""Method to call when the simulation ends"""
pass
def trial_start(self, env):
"""Method to call when a trial start"""
def iteration_start(self, env):
"""Method to call when a iteration start"""
pass
def trial_end(self, env):
"""Method to call when a trial ends"""
def iteration_end(self, env, params, params_id):
"""Method to call when a iteration ends"""
pass
def output(self, f, mode="w", **kwargs):
if self.dry_run:
if not self.dump:
f = DryRunner(f, copy_to=self.copy_to)
else:
try:
@@ -85,74 +86,104 @@ class Exporter:
f = os.path.join(self.outdir, f)
except TypeError:
pass
return open_or_reuse(f, mode=mode, **kwargs)
return open_or_reuse(f, mode=mode, backup=self.simulation.backup, **kwargs)
def get_dfs(self, env):
yield from get_dc_dfs(env.datacollector, trial_id=env.id)
def get_dfs(self, env, **kwargs):
yield from get_dc_dfs(env.datacollector,
simulation_id=self.simulation.id,
iteration_id=env.id,
**kwargs)
def get_dc_dfs(dc, trial_id=None):
dfs = {
"env": dc.get_model_vars_dataframe(),
"agents": dc.get_agent_vars_dataframe(),
}
def get_dc_dfs(dc, **kwargs):
dfs = {}
dfe = dc.get_model_vars_dataframe()
dfe.index.rename("step", inplace=True)
dfs["env"] = dfe
try:
dfa = dc.get_agent_vars_dataframe()
dfa.index.rename(["step", "agent_id"], inplace=True)
dfs["agents"] = dfa
except UserWarning:
pass
for table_name in dc.tables:
dfs[table_name] = dc.get_table_dataframe(table_name)
if trial_id:
for (name, df) in dfs.items():
df["trial_id"] = trial_id
for (name, df) in dfs.items():
for (k, v) in kwargs.items():
df[k] = v
df.set_index(["simulation_id", "iteration_id"], append=True, inplace=True)
yield from dfs.items()
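Custom exporters now implement `iteration_end(env, params, params_id)` rather than `trial_end(env)`. A hedged sketch of a minimal exporter against the updated interface:

from soil.exporters import Exporter

class StepCount(Exporter):
    """Toy exporter: append the step count of every iteration to a text file."""

    def iteration_end(self, env, params, params_id, *args, **kwargs):
        with self.output("steps.txt", mode="a") as f:
            f.write(f"{env.id}\t{params_id}\t{env.schedule.steps}\n")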
class SQLite(Exporter):
"""Writes sqlite results"""
sim_started = False
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
if not self.dump:
logger.debug("NOT dumping results")
return
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
logger.info("Dumping results to %s", self.dbpath)
try_backup(self.dbpath, remove=True)
if self.simulation.backup:
try_backup(self.dbpath, remove=True)
if self.simulation.overwrite:
if os.path.exists(self.dbpath):
os.remove(self.dbpath)
self.engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
def trial_end(self, env):
if self.dry_run:
logger.info("Running in DRY_RUN mode, the database will NOT be created")
sim_dict = {k: serialize(v)[0] for (k,v) in self.simulation.to_dict().items()}
sim_dict["simulation_id"] = self.simulation.id
df = pd.DataFrame([sim_dict])
df.to_sql("configuration", con=self.engine, if_exists="append")
def iteration_end(self, env, params, params_id, *args, **kwargs):
if not self.dump:
logger.info("Running in NO DUMP mode. Results will NOT be saved to a DB.")
return
with timer(
"Dumping simulation {} trial {}".format(self.simulation.name, env.id)
"Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
):
engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
pd.DataFrame([{"simulation_id": self.simulation.id,
"params_id": params_id,
"iteration_id": env.id,
"key": k,
"value": serialize(v)[0]} for (k,v) in params.items()]).to_sql("parameters", con=self.engine, if_exists="append")
for (t, df) in self.get_dfs(env):
df.to_sql(t, con=engine, if_exists="append")
for (t, df) in self.get_dfs(env, params_id=params_id):
df.to_sql(t, con=self.engine, if_exists="append")
class csv(Exporter):
"""Export the state of each environment (and its agents) a CSV file for the simulation"""
"""Export the state of each environment (and its agents) in a separate CSV file"""
def sim_start(self):
super().sim_start()
def trial_end(self, env):
def iteration_end(self, env, params, params_id, *args, **kwargs):
with timer(
"[CSV] Dumping simulation {} trial {} @ dir {}".format(
"[CSV] Dumping simulation {} iteration {} @ dir {}".format(
self.simulation.name, env.id, self.outdir
)
):
for (df_name, df) in self.get_dfs(env):
with self.output("{}.{}.csv".format(env.id, df_name)) as f:
for (df_name, df) in self.get_dfs(env, params_id=params_id):
with self.output("{}.{}.csv".format(env.id, df_name), mode="a") as f:
df.to_csv(f)
# TODO: reimplement GEXF exporting without history
class gexf(Exporter):
def trial_end(self, env):
if self.dry_run:
logger.info("Not dumping GEXF in dry_run mode")
def iteration_end(self, env, *args, **kwargs):
if not self.dump:
logger.info("Not dumping GEXF (NO_DUMP mode)")
return
with timer(
"[GEXF] Dumping simulation {} trial {}".format(self.simulation.name, env.id)
"[GEXF] Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
):
with self.output("{}.gexf".format(env.id), mode="wb") as f:
network.dump_gexf(env.history_to_graph(), f)
@@ -164,13 +195,13 @@ class dummy(Exporter):
with self.output("dummy", "w") as f:
f.write("simulation started @ {}\n".format(current_time()))
def trial_start(self, env):
def iteration_start(self, env):
with self.output("dummy", "w") as f:
f.write("trial started@ {}\n".format(current_time()))
f.write("iteration started@ {}\n".format(current_time()))
def trial_end(self, env):
def iteration_end(self, env, *args, **kwargs):
with self.output("dummy", "w") as f:
f.write("trial ended@ {}\n".format(current_time()))
f.write("iteration ended@ {}\n".format(current_time()))
def sim_end(self):
with self.output("dummy", "a") as f:
@@ -178,7 +209,7 @@ class dummy(Exporter):
class graphdrawing(Exporter):
def trial_end(self, env):
def iteration_end(self, env, *args, **kwargs):
# Outside effects
f = plt.figure()
nx.draw(
@@ -193,9 +224,9 @@ class graphdrawing(Exporter):
class summary(Exporter):
"""Print a summary of each trial to sys.stdout"""
"""Print a summary of each iteration to sys.stdout"""
def trial_end(self, env):
def iteration_end(self, env, *args, **kwargs):
msg = ""
for (t, df) in self.get_dfs(env):
if not len(df):
@@ -224,10 +255,10 @@ class YAML(Exporter):
"""Writes the configuration of the simulation to a YAML file"""
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
if not self.dump:
logger.debug("NOT dumping results")
return
with self.output(self.simulation.name + ".dumped.yml") as f:
with self.output(self.simulation.id + ".dumped.yml") as f:
logger.info(f"Dumping simulation configuration to {self.outdir}")
f.write(self.simulation.to_yaml())
@@ -235,22 +266,17 @@ class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
def __init__(self, *args, exporter_cls=[], **kwargs):
exporter_cls = exporter_cls or [YAML, SQLite, summary]
exporter_cls = exporter_cls or [YAML, SQLite]
self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
def sim_start(self):
def sim_start(self, *args, **kwargs):
for exporter in self.inner:
exporter.sim_start()
exporter.sim_start(*args, **kwargs)
def sim_end(self):
def sim_end(self, *args, **kwargs):
for exporter in self.inner:
exporter.sim_end()
exporter.sim_end(*args, **kwargs)
def trial_start(self, env):
def iteration_end(self, *args, **kwargs):
for exporter in self.inner:
exporter.trial_start(env)
def trial_end(self, env):
for exporter in self.inner:
exporter.trial_end(env)
exporter.iteration_end(*args, **kwargs)


@@ -10,47 +10,47 @@ import networkx as nx
from . import config, serialization, basestring
def from_config(cfg: config.NetConfig, dir_path: str = None):
if not isinstance(cfg, config.NetConfig):
cfg = config.NetConfig(**cfg)
def from_topology(topology, dir_path: str = None):
if topology is None:
return nx.Graph()
if isinstance(topology, nx.Graph):
return topology
if cfg.path:
path = cfg.path
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == "gexf":
kwargs["version"] = "1.2draft"
kwargs["node_type"] = int
# If it's a dict, assume it's a node-link graph
if isinstance(topology, dict):
try:
method = getattr(nx.readwrite, "read_" + extension)
except AttributeError:
raise AttributeError("Unknown format")
return method(path, **kwargs)
return nx.json_graph.node_link_graph(topology)
except Exception as ex:
raise ValueError("Unknown topology format")
# Otherwise, treat like a path
path = topology
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == "gexf":
kwargs["version"] = "1.2draft"
kwargs["node_type"] = int
try:
method = getattr(nx.readwrite, "read_" + extension)
except AttributeError:
raise AttributeError("Unknown format")
return method(path, **kwargs)
if cfg.params:
net_args = dict(cfg.params)
net_gen = net_args.pop("generator")
if dir_path not in sys.path:
sys.path.append(dir_path)
def from_params(generator, dir_path: str = None, **params):
method = serialization.deserializer(
net_gen,
known_modules=[
"networkx.generators",
],
)
return method(**net_args)
if dir_path not in sys.path:
sys.path.append(dir_path)
if isinstance(cfg.fixed, config.Topology):
cfg = cfg.fixed.dict()
if isinstance(cfg, str) or isinstance(cfg, dict):
return nx.json_graph.node_link_graph(cfg)
return nx.Graph()
method = serialization.deserializer(
generator,
known_modules=[
"networkx.generators",
],
)
return method(**params)
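A hedged sketch of the two helpers that replace `from_config` (bare generator names are looked up in `networkx.generators`; the edgelist path is assumed to exist):

from soil import network

G1 = network.from_topology("torvalds.edgelist")                   # read a graph from a path
G2 = network.from_params(generator="barabasi_albert_graph", n=100, m=2)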
def find_unassigned(G, shuffle=False, random=random):

soil/parameters.py Normal file

@@ -0,0 +1,32 @@
from __future__ import annotations
from typing_extensions import Annotated
import annotated_types
from typing import *
from dataclasses import dataclass
class Parameter:
pass
def floatrange(
*,
gt: Optional[float] = None,
ge: Optional[float] = None,
lt: Optional[float] = None,
le: Optional[float] = None,
multiple_of: Optional[float] = None,
) -> type[float]:
return Annotated[
float,
annotated_types.Interval(gt=gt, ge=ge, lt=lt, le=le),
annotated_types.MultipleOf(multiple_of) if multiple_of is not None else None,
]
function = Annotated[Callable, Parameter]
Integer = Annotated[int, Parameter]
Float = Annotated[float, Parameter]
probability = floatrange(ge=0, le=1)
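These annotated types are intended for declaring tunable model parameters as annotated class attributes, so that tooling can distinguish them from ordinary values. A hedged sketch (the environment and attribute names are made up):

from soil import Environment
from soil.parameters import Integer, probability

class ParamEnv(Environment):
    num_seeds: Integer = 2
    prob_infect: probability = 0.25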


@@ -4,14 +4,15 @@ import ast
import sys
import re
import importlib
import importlib.machinery, importlib.util
from glob import glob
from itertools import product, chain
from .config import Config
import yaml
import networkx as nx
from . import config
from jinja2 import Template
@@ -90,23 +91,55 @@ def load_files(*patterns, **kwargs):
for i in glob(pattern, **kwargs, recursive=True):
for cfg in load_file(i):
path = os.path.abspath(i)
yield Config.from_raw(cfg), path
yield cfg, path
def load_config(cfg):
if isinstance(cfg, Config):
yield cfg, os.getcwd()
elif isinstance(cfg, dict):
yield Config.from_raw(cfg), os.getcwd()
if isinstance(cfg, dict):
yield config.load_config(cfg), os.getcwd()
else:
yield from load_files(cfg)
builtins = importlib.import_module("builtins")
KNOWN_MODULES = [
"soil",
]
KNOWN_MODULES = {
'soil': None,
}
MODULE_FILES = {}
def add_source_file(file):
"""Add a file to the list of known modules"""
file = os.path.abspath(file)
if file in MODULE_FILES:
logger.warning(f"File {file} already added as module {MODULE_FILES[file]}. Reloading")
remove_source_file(file)
modname = f"imported_module_{len(MODULE_FILES)}"
loader = importlib.machinery.SourceFileLoader(modname, file)
spec = importlib.util.spec_from_loader(loader.name, loader)
my_module = importlib.util.module_from_spec(spec)
loader.exec_module(my_module)
MODULE_FILES[file] = modname
KNOWN_MODULES[modname] = my_module
def remove_source_file(file):
"""Remove a file from the list of known modules"""
file = os.path.abspath(file)
modname = None
try:
modname = MODULE_FILES.pop(file)
KNOWN_MODULES.pop(modname)
except KeyError as ex:
raise ValueError(f"File {file} had not been added as a module: {ex}")
def get_module(modname):
"""Get a module from the list of known modules"""
if modname not in KNOWN_MODULES or KNOWN_MODULES[modname] is None:
module = importlib.import_module(modname)
KNOWN_MODULES[modname] = module
return KNOWN_MODULES[modname]
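A hedged sketch of the new source-file registry: registering a local Python file makes its classes resolvable by bare name through `deserializer` (the file and class names are made up):

from soil import serialization

serialization.add_source_file("my_agents.py")          # registers the file as a known module
MyAgent = serialization.deserializer("MyAgent")        # now resolvable by bare name
serialization.remove_source_file("my_agents.py")       # undo the registration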
def name(value, known_modules=KNOWN_MODULES):
@@ -124,9 +157,7 @@ def name(value, known_modules=KNOWN_MODULES):
if known_modules and modname in known_modules:
return tname
for kmod in known_modules:
if not kmod:
continue
module = importlib.import_module(kmod)
module = get_module(kmod)
if hasattr(module, tname):
return tname
return "{}.{}".format(modname, tname)
@@ -150,7 +181,7 @@ def serialize_dict(d, known_modules=KNOWN_MODULES):
d = dict(d)
except (ValueError, TypeError) as ex:
return serialize(d)[0]
for (k, v) in d.items():
for (k, v) in reversed(list(d.items())):
if isinstance(v, dict):
d[k] = serialize_dict(v, known_modules=known_modules)
elif isinstance(v, list):
@@ -177,7 +208,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
match = IS_CLASS.match(type_)
if match:
modname, tname = match.group(1).rsplit(".", 1)
module = importlib.import_module(modname)
module = get_module(modname)
cls = getattr(module, tname)
return getattr(cls, "deserialize", cls)
@@ -195,7 +226,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
errors = []
for modname, tname in options:
try:
module = importlib.import_module(modname)
module = get_module(modname)
cls = getattr(module, tname)
return getattr(cls, "deserialize", cls)
except (ImportError, AttributeError) as ex:


@@ -1,87 +1,153 @@
import os
from time import time as current_time, strftime
import importlib
import sys
import yaml
import traceback
import hashlib
import inspect
import logging
import networkx as nx
from tqdm.auto import tqdm
from textwrap import dedent
from dataclasses import dataclass, field, asdict
from dataclasses import dataclass, field, asdict, replace
from typing import Any, Dict, Union, Optional, List
from networkx.readwrite import json_graph
from functools import partial
import pickle
from contextlib import contextmanager
from itertools import product
import json
from . import serialization, exporters, utils, basestring, agents
from .environment import Environment
from .utils import logger, run_and_return_exceptions
from .config import Config, convert_old
from .debugging import set_trace
_AVOID_RUNNING = False
_QUEUED = []
@contextmanager
def do_not_run():
global _AVOID_RUNNING
_AVOID_RUNNING = True
try:
logger.debug("NOT RUNNING")
yield
finally:
logger.debug("RUNNING AGAIN")
_AVOID_RUNNING = False
def _iter_queued():
while _QUEUED:
(cls, params) = _QUEUED.pop(0)
yield replace(cls, parameters=params)
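# A standalone sketch of the deferral pattern above, assuming only the standard
# library: while the context manager is active, work is queued instead of executed,
# which is how iter_from_py can import a file without triggering its sim.run() calls.
from contextlib import contextmanager

_PAUSED = False
_PENDING = []

@contextmanager
def paused():
    global _PAUSED
    _PAUSED = True
    try:
        yield
    finally:
        _PAUSED = False

def run_or_queue(job):
    if _PAUSED:
        _PENDING.append(job)   # deferred, like _QUEUED above
        return None
    return job()

# with paused():
#     run_or_queue(lambda: print("deferred until the queue is drained"))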
# TODO: change documentation for simulation
# TODO: rename trials to iterations
# TODO: make parameters a dict of iterable/any
@dataclass
class Simulation:
"""
Parameters
---------
config (optional): :class:`config.Config`
name of the Simulation
A simulation is a collection of agents and a model. It is responsible for running the model and agents, and collecting data from them.
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
Args:
version: The version of the simulation. This is used to determine how to load the simulation.
name: The name of the simulation.
description: A description of the simulation.
group: The group that the simulation belongs to.
model: The model to use for the simulation. This can be a string or a class.
parameters: The parameters to pass to the model.
matrix: A matrix of values for each parameter.
seed: The seed to use for the simulation.
dir_path: The directory path to use for the simulation.
max_time: The maximum time to run the simulation.
max_steps: The maximum number of steps to run the simulation.
interval: The interval to use for the simulation.
iterations: The number of iterations (times) to run the simulation.
num_processes: The number of processes to use for the simulation. If greater than one, simulations will be performed in parallel. This may make debugging and error handling difficult.
tables: The tables to use in the simulation datacollector
agent_reporters: The agent reporters to use in the datacollector
model_reporters: The model reporters to use in the datacollector
dry_run: Whether or not to run the simulation. If True, the simulation will not be run.
backup: Whether or not to backup the simulation. If True, the simulation files will be backed up to a different directory.
overwrite: Whether or not to replace existing simulation data.
source_file: Python file to use to find additional classes.
"""
version: str = "2"
name: str = "Unnamed simulation"
source_file: Optional[str] = None
name: Optional[str] = None
description: Optional[str] = ""
group: str = None
model_class: Union[str, type] = "soil.Environment"
model_params: dict = field(default_factory=dict)
seed: str = field(default_factory=lambda: current_time())
backup: bool = False
overwrite: bool = False
dry_run: bool = False
dump: bool = False
model: Union[str, type] = "soil.Environment"
parameters: dict = field(default_factory=dict)
matrix: dict = field(default_factory=dict)
seed: str = "default"
dir_path: str = field(default_factory=lambda: os.getcwd())
max_time: float = float("inf")
max_steps: int = -1
max_time: float = None
max_steps: int = None
interval: int = 1
num_trials: int = 1
iterations: int = 1
num_processes: Optional[int] = 1
parallel: Optional[bool] = False
exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
tables: Optional[Dict[str, Any]] = field(default_factory=dict)
outdir: Optional[str] = None
outdir: str = field(default_factory=lambda: os.path.join(os.getcwd(), "soil_output"))
# outdir: Optional[str] = None
exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
dry_run: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
level: int = logging.INFO
skip_test: Optional[bool] = False
debug: Optional[bool] = False
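# A construction sketch based on the fields declared above; all values are
# hypothetical and only illustrate how the parameters fit together:
#
# sim = Simulation(
#     name="example",
#     model="soil.Environment",         # a dotted class name or a class object
#     parameters={"n_agents": 10},      # passed to the model
#     matrix={"prob": [0.1, 0.5]},      # one configuration per value combination
#     iterations=2,                     # repetitions per configuration
#     max_steps=100,
#     seed="reproducible",
# )
# envs = sim.run()                      # returns the resulting environments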
@classmethod
def from_dict(cls, env, **kwargs):
def __post_init__(self):
if self.name is None:
if isinstance(self.model, str):
self.name = self.model
else:
self.name = self.model.__name__
self.logger = logger.getChild(self.name)
self.logger.setLevel(self.level)
ignored = {
k: v for k, v in env.items() if k not in inspect.signature(cls).parameters
}
if self.source_file:
source_file = self.source_file
if not os.path.isabs(source_file):
source_file = os.path.abspath(os.path.join(self.dir_path, source_file))
serialization.add_source_file(source_file)
self.source_file = source_file
d = {k: v for k, v in env.items() if k not in ignored}
if ignored:
d.setdefault("extra", {}).update(ignored)
if ignored:
logger.warning(f'Ignoring these parameters (added to "extra"): { ignored }')
d.update(kwargs)
if isinstance(self.model, str):
self.model = serialization.deserialize(self.model)
return cls(**d)
def deserialize_reporters(reporters):
for (k, v) in reporters.items():
if isinstance(v, str) and v.startswith("py:"):
reporters[k] = serialization.deserialize(v.split(":", 1)[1])
return reporters
def run_simulation(self, *args, **kwargs):
return self.run(*args, **kwargs)
self.agent_reporters = deserialize_reporters(self.agent_reporters)
self.model_reporters = deserialize_reporters(self.model_reporters)
self.tables = deserialize_reporters(self.tables)
if self.source_file:
serialization.remove_source_file(self.source_file)
self.id = f"{self.name}_{current_time()}"
def run(self, *args, **kwargs):
def run(self, **kwargs):
"""Run the simulation and return the list of resulting environments"""
logger.info(
if kwargs:
return replace(self, **kwargs).run()
self.logger.debug(
dedent(
"""
Simulation:
@@ -90,156 +156,157 @@ class Simulation:
)
+ self.to_yaml()
)
return list(self.run_gen(*args, **kwargs))
param_combinations = self._collect_params(**kwargs)
if _AVOID_RUNNING:
_QUEUED.extend((self, param) for param in param_combinations)
return []
def run_gen(
self,
num_processes=1,
dry_run=None,
exporters=None,
outdir=None,
exporter_params={},
log_level=None,
**kwargs,
):
"""Run the simulation and yield the resulting environments."""
if log_level:
logger.setLevel(log_level)
outdir = outdir or self.outdir
logger.info("Using exporters: %s", exporters or [])
logger.info("Output directory: %s", outdir)
if dry_run is None:
dry_run = self.dry_run
if exporters is None:
exporters = self.exporters
if not exporter_params:
exporter_params = self.exporter_params
self.logger.debug("Using exporters: %s", self.exporters or [])
exporters = serialization.deserialize_all(
exporters,
self.exporters,
simulation=self,
known_modules=[
"soil.exporters",
],
dry_run=dry_run,
outdir=outdir,
**exporter_params,
dump=self.dump and not self.dry_run,
outdir=self.outdir,
**self.exporter_params,
)
with utils.timer("simulation {}".format(self.name)):
for exporter in exporters:
exporter.sim_start()
for env in utils.run_parallel(
func=self.run_trial,
iterable=range(int(self.num_trials)),
num_processes=num_processes,
log_level=log_level,
**kwargs,
):
results = []
for exporter in exporters:
exporter.sim_start()
for params in tqdm(param_combinations, desc=self.name, unit="configuration"):
for (k, v) in params.items():
tqdm.write(f"{k} = {v}")
sha = hashlib.sha256()
sha.update(repr(sorted(params.items())).encode())
params_id = sha.hexdigest()[:7]
for env in self._run_iters_for_params(params):
for exporter in exporters:
exporter.trial_start(env)
exporter.iteration_end(env, params, params_id)
results.append(env)
for exporter in exporters:
exporter.trial_end(env)
for exporter in exporters:
exporter.sim_end()
yield env
return results
for exporter in exporters:
exporter.sim_end()
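# A standalone illustration of the short configuration id computed in run() above:
# the first seven hex digits of a SHA-256 over the sorted parameter items.
import hashlib

def short_params_id(params):
    sha = hashlib.sha256()
    sha.update(repr(sorted(params.items())).encode())
    return sha.hexdigest()[:7]

# short_params_id({"prob": 0.5, "n_agents": 10}) -> a stable seven-character hex string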
def _collect_params(self):
def get_env(self, trial_id=0, model_params=None, **kwargs):
"""Create an environment for a trial of the simulation"""
parameters = []
if self.parameters:
parameters.append(self.parameters)
if self.matrix:
assert isinstance(self.matrix, dict)
for values in product(*(self.matrix.values())):
parameters.append(dict(zip(self.matrix.keys(), values)))
def deserialize_reporters(reporters):
for (k, v) in reporters.items():
if isinstance(v, str) and v.startswith("py:"):
reporters[k] = serialization.deserialize(v.split(":", 1)[1])
return reporters
if not parameters:
parameters = [{}]
if self.dump:
self.logger.info("Output directory: %s", self.outdir)
params = self.model_params.copy()
if model_params:
params.update(model_params)
params.update(kwargs)
return parameters
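# A standalone illustration of how the matrix above expands into parameter
# combinations, assuming only the standard library:
from itertools import product

matrix = {"prob": [0.1, 0.5], "n_agents": [10, 100]}
combinations = [dict(zip(matrix.keys(), values)) for values in product(*matrix.values())]
# combinations == [{'prob': 0.1, 'n_agents': 10}, {'prob': 0.1, 'n_agents': 100},
#                  {'prob': 0.5, 'n_agents': 10}, {'prob': 0.5, 'n_agents': 100}]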
agent_reporters = self.agent_reporters.copy()
agent_reporters.update(deserialize_reporters(params.pop("agent_reporters", {})))
model_reporters = self.model_reporters.copy()
model_reporters.update(deserialize_reporters(params.pop("model_reporters", {})))
tables = self.tables.copy()
tables.update(deserialize_reporters(params.pop("tables", {})))
def _run_iters_for_params(
self,
params
):
"""Run the simulation and yield the resulting environments."""
env = serialization.deserialize(self.model_class)
return env(
id=f"{self.name}_trial_{trial_id}",
seed=f"{self.seed}_trial_{trial_id}",
try:
if self.source_file:
serialization.add_source_file(self.source_file)
with utils.timer(f"running for config {params}"):
if self.dry_run:
def func(*args, **kwargs):
return None
else:
func = self._run_model
for env in tqdm(utils.run_parallel(
func=func,
iterable=range(self.iterations),
**params,
), total=self.iterations, leave=False):
if env is None and self.dry_run:
continue
yield env
finally:
if self.source_file:
serialization.remove_source_file(self.source_file)
def _get_env(self, iteration_id, params):
"""Create an environment for a iteration of the simulation"""
iteration_id = str(iteration_id)
agent_reporters = self.agent_reporters
agent_reporters.update(params.pop("agent_reporters", {}))
model_reporters = self.model_reporters
model_reporters.update(params.pop("model_reporters", {}))
return self.model(
id=iteration_id,
seed=f"{self.seed}_iteration_{iteration_id}",
dir_path=self.dir_path,
interval=self.interval,
logger=self.logger.getChild(iteration_id),
agent_reporters=agent_reporters,
model_reporters=model_reporters,
tables=tables,
tables=self.tables,
**params,
)
def run_trial(
self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts
):
def _run_model(self, iteration_id, **params):
"""
Run a single trial of the simulation
Run a single iteration of the simulation
"""
if log_level:
logger.setLevel(log_level)
model = self.get_env(trial_id, **opts)
trial_id = trial_id if trial_id is not None else current_time()
with utils.timer("Simulation {} trial {}".format(self.name, trial_id)):
return self.run_model(
model=model, trial_id=trial_id, until=until, log_level=log_level
# Set-up iteration environment and graph
model = self._get_env(iteration_id, params)
with utils.timer("Simulation {} iteration {}".format(self.name, iteration_id)):
max_time = self.max_time
max_steps = self.max_steps
if (max_time is not None) and (max_steps is not None):
is_done = lambda model: (not model.running) or (model.schedule.time >= max_time) or (model.schedule.steps >= max_steps)
elif max_time is not None:
is_done = lambda model: (not model.running) or (model.schedule.time >= max_time)
elif max_steps is not None:
is_done = lambda model: (not model.running) or (model.schedule.steps >= max_steps)
else:
is_done = lambda model: not model.running
if not model.schedule.agents:
raise Exception("No agents in model. This is probably a bug. Make sure that the model has agents scheduled after its initialization.")
newline = "\n"
self.logger.debug(
dedent(
f"""
Model stats:
Agent count: { model.schedule.get_agent_count() }
Topology size: { len(model.G) if hasattr(model, "G") else 0 }
"""
)
)
def run_model(self, model, until=None, **opts):
# Set-up trial environment and graph
until = float(until or self.max_time or "inf")
if self.debug:
set_trace()
# Set up agents on nodes
def is_done():
return not model.running
while not is_done(model):
self.logger.debug(
f'Simulation time {model.schedule.time}/{max_time}.'
)
model.step()
if until and hasattr(model.schedule, "time"):
prev = is_done
def is_done():
return prev() or model.schedule.time >= until
if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, "steps"):
prev_steps = is_done
def is_done():
return prev_steps() or model.schedule.steps >= self.max_steps
newline = "\n"
logger.info(
dedent(
f"""
Model stats:
Agents (total: { model.schedule.get_agent_count() }):
- { (newline + ' - ').join(str(a) for a in model.schedule.agents) }
Topology size: { len(model.G) if hasattr(model, "G") else 0 }
"""
)
)
while not is_done():
utils.logger.debug(
f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}'
)
model.step()
if (
model.schedule.time < until
): # Simulation ended (no more steps) before the expected time
model.schedule.time = until
return model
def to_dict(self):
@@ -250,14 +317,27 @@ Model stats:
return yaml.dump(self.to_dict())
def iter_from_file(*files, **kwargs):
for f in files:
try:
yield from iter_from_py(f, **kwargs)
except ValueError as ex:
yield from iter_from_config(f, **kwargs)
def from_file(*args, **kwargs):
return list(iter_from_file(*args, **kwargs))
def iter_from_config(*cfgs, **kwargs):
for config in cfgs:
configs = list(serialization.load_config(config))
for config, path in configs:
d = dict(config)
d.update(kwargs)
if "dir_path" not in d:
d["dir_path"] = os.path.dirname(path)
yield Simulation.from_dict(d, **kwargs)
yield Simulation(**d)
def from_config(conf_or_path):
@@ -266,26 +346,50 @@ def from_config(conf_or_path):
raise AttributeError("Provide only one configuration")
return lst[0]
def iter_from_py(pyfile, module_name='custom_simulation'):
def iter_from_py(pyfile, module_name='imported_file', **kwargs):
"""Try to load every Simulation instance in a given Python file"""
import importlib
import inspect
spec = importlib.util.spec_from_file_location(module_name, pyfile)
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
# import pdb;pdb.set_trace()
for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
yield sim
del sys.modules[module_name]
added = False
sims = []
assert not _AVOID_RUNNING
with do_not_run():
assert _AVOID_RUNNING
spec = importlib.util.spec_from_file_location(module_name, pyfile)
folder = os.path.dirname(pyfile)
if folder not in sys.path:
added = True
sys.path.append(folder)
if not spec:
raise ValueError(f"{pyfile} does not seem to be a Python module")
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
sims.append(sim)
for sim in _iter_queued():
sims.append(sim)
if not sims:
for (_name, sim) in inspect.getmembers(module, lambda x: inspect.isclass(x) and issubclass(x, Simulation)):
sims.append(sim(**kwargs))
del sys.modules[module_name]
assert not _AVOID_RUNNING
if not sims:
raise AttributeError(f"No valid configurations found in {pyfile}")
if added:
sys.path.remove(folder)
for sim in sims:
yield replace(sim, **kwargs)
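# A usage sketch based on the signature above; "my_simulation.py" is a hypothetical
# file that defines one or more Simulation instances (or Simulation subclasses):
#
# for sim in iter_from_py("my_simulation.py", iterations=2):
#     envs = sim.run()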
def from_py(pyfile):
return next(iter_from_py(pyfile))
def run_from_config(*configs, **kwargs):
for sim in iter_from_config(*configs):
def run_from_file(*files, **kwargs):
for sim in iter_from_file(*files):
logger.info(f"Using config(s): {sim.name}")
sim.run_simulation(**kwargs)
def run(env, iterations=1, num_processes=1, dump=False, name="test", **kwargs):
return Simulation(model=env, iterations=iterations, name=name, dump=dump, num_processes=num_processes, **kwargs).run()
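# A usage sketch for the convenience helper above; "MyEnvironment" stands in for any
# Environment subclass, and extra keyword arguments are forwarded to Simulation:
#
# from soil import Environment
#
# class MyEnvironment(Environment):
#     ...
#
# envs = run(MyEnvironment, iterations=3, max_steps=100, dump=False)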


@@ -1,6 +1,6 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop
from heapq import heappush, heappop, heapreplace
import math
from inspect import getsource
@@ -97,8 +97,10 @@ class TimedActivation(BaseScheduler):
self._next = {}
self._queue = []
self._shuffle = shuffle
self.step_interval = 1
self.logger = logger.getChild(f"time_{ self.model }")
# self.step_interval = getattr(self.model, "interval", 1)
self.step_interval = self.model.interval
self.logger = getattr(self.model, "logger", logger).getChild(f"time_{ self.model }")
self.next_time = self.time
def add(self, agent: MesaAgent, when=None):
if when is None:
@@ -109,7 +111,7 @@ class TimedActivation(BaseScheduler):
self._schedule(agent, None, when)
super().add(agent)
def _schedule(self, agent, condition=None, when=None):
def _schedule(self, agent, condition=None, when=None, replace=False):
if condition:
if not when:
when, condition = condition.schedule_next(
@@ -124,7 +126,10 @@ class TimedActivation(BaseScheduler):
else:
key = (when, agent.unique_id, condition)
self._next[agent.unique_id] = key
heappush(self._queue, (key, agent))
if replace:
heapreplace(self._queue, (key, agent))
else:
heappush(self._queue, (key, agent))
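# A standalone sketch of the heapreplace semantics relied on above, assuming only
# the standard library: heapreplace pops the smallest item and pushes the new one in
# a single operation, which is what _schedule needs when it re-schedules the agent
# currently at the head of the queue.
from heapq import heapify, heappush, heapreplace

queue = [(1, "agent-a"), (3, "agent-b")]
heapify(queue)
heapreplace(queue, (2, "agent-a"))   # the head (1, "agent-a") is swapped out atomically
heappush(queue, (4, "agent-c"))      # a plain push leaves the existing head in place
# sorted(queue) == [(2, 'agent-a'), (3, 'agent-b'), (4, 'agent-c')]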
def step(self) -> None:
"""
@@ -136,17 +141,16 @@ class TimedActivation(BaseScheduler):
if not self.model.running or self.time == INFINITY:
return
self.logger.debug("Queue length: {ql}", ql=len(self._queue))
self.logger.debug(f"Queue length: %s", len(self._queue))
while self._queue:
((when, _id, cond), agent) = self._queue[0]
if when > self.time:
break
heappop(self._queue)
if cond:
if not cond.ready(agent, self.time):
self._schedule(agent, cond)
self._schedule(agent, cond, replace=True)
continue
try:
agent._last_return = cond.return_value(agent)
@@ -156,43 +160,45 @@ class TimedActivation(BaseScheduler):
agent._last_return = None
agent._last_except = None
self.logger.debug("Stepping agent {agent}", agent=agent)
self.logger.debug("Stepping agent %s", agent)
self._next.pop(agent.unique_id, None)
try:
returned = agent.step()
except DeadAgent:
agent.alive = False
heappop(self._queue)
continue
# Check status for MESA agents
if not getattr(agent, "alive", True):
heappop(self._queue)
continue
if returned:
next_check = returned.schedule_next(
self.time, self.step_interval, first=True
)
self._schedule(agent, when=next_check[0], condition=next_check[1])
self._schedule(agent, when=next_check[0], condition=next_check[1], replace=True)
else:
next_check = (self.time + self.step_interval, None)
self._schedule(agent)
self._schedule(agent, replace=True)
self.steps += 1
if not self._queue:
self.time = INFINITY
self.model.running = False
return self.time
self.time = INFINITY
return
next_time = self._queue[0][0][0]
if next_time < self.time:
raise Exception(
f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
)
self.logger.debug(f"Updating time step: {self.time} -> {next_time}")
self.logger.debug("Updating time step: %s -> %s ", self.time, next_time)
self.time = next_time


@@ -10,7 +10,7 @@ from multiprocessing import Pool, cpu_count
from contextlib import contextmanager
logger = logging.getLogger("soil")
logger.setLevel(logging.INFO)
logger.setLevel(logging.WARNING)
timeformat = "%H:%M:%S"
@@ -24,7 +24,7 @@ consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logging.basicConfig(
level=logging.DEBUG,
level=logging.INFO,
handlers=[
consoleHandler,
],
@@ -60,7 +60,7 @@ def try_backup(path, remove=False):
if not os.path.exists(backup_dir):
os.makedirs(backup_dir)
newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
if move:
if remove:
move(path, newpath)
else:
copyfile(path, newpath)
@@ -126,7 +126,7 @@ def unflatten_dict(d):
def run_and_return_exceptions(func, *args, **kwargs):
"""
A wrapper for run_trial that catches exceptions and returns them.
A wrapper for a function that catches exceptions and returns them.
It is meant for async simulations.
"""
try:
@@ -154,3 +154,7 @@ def run_parallel(func, iterable, num_processes=1, **kwargs):
else:
for i in iterable:
yield func(i, **kwargs)
def int_seed(seed: str):
return int.from_bytes(seed.encode(), "little")
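# A quick check of the helper above: the seed string is read as little-endian bytes,
# so the resulting integer is deterministic across runs and platforms.
assert int.from_bytes("a".encode(), "little") == 97             # ord("a")
assert int.from_bytes("ab".encode(), "little") == 97 + 98 * 256  # later bytes weigh more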


@@ -1,6 +0,0 @@
from mesa.visualization.UserParam import UserSettableParameter
class UserSettableParameter(UserSettableParameter):
def __str__(self):
return self.value

(Binary image file changed in this diff; the previous version was 1.1 MiB and is not shown.)

Some files were not shown because too many files have changed in this diff.