mirror of https://github.com/gsi-upm/soil synced 2025-10-19 09:48:29 +00:00

Compare commits


8 Commits

Author SHA1 Message Date
J. Fernando Sánchez
2869b1e1e6 Clean-up
* Removed old/unnecessary models
* Added a `simulation.{iter_}from_py` method to load simulations from python
files
* Changed tests of examples to run programmatic simulations
* Fixed programmatic examples
2022-11-13 20:31:05 +01:00
J. Fernando Sánchez
d3cee18635 Add seed to cars example 2022-10-20 14:47:28 +02:00
J. Fernando Sánchez
9a7b62e88e Release 0.30.0rc3 2022-10-20 14:12:34 +02:00
J. Fernando Sánchez
c09e480d37 black formatting 2022-10-20 14:12:10 +02:00
J. Fernando Sánchez
b2d48cb4df Add test cases for 'ASK' 2022-10-20 14:10:34 +02:00
J. Fernando Sánchez
a1262edd2a Refactored time
Treating time and conditions as the same entity was getting confusing, and it
added a lot of unnecessary abstraction in a critical part (the scheduler).

The scheduling queue now has the time as a floating-point number (faster), the agent
id (for ties) and the condition, as well as the agent. The first three
elements (time, id, condition) can be considered the "key" for the event.

To allow for agent execution to be "randomized" within every step, a new
parameter has been added to the scheduler, which makes it add a random number to
the key in order to change the ordering.

`EventedAgent.received` now checks the messages before returning control to the
user by default.
2022-10-20 12:15:25 +02:00
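A rough sketch of the scheduling key described above (illustrative only; the helper names and the exact placement of the random tie-breaker are assumptions, not the actual `soil.time` code):

```python
import heapq
import itertools
import random
from dataclasses import dataclass


@dataclass
class Agent:
    unique_id: int


queue = []
_counter = itertools.count()  # breaks remaining ties without comparing conditions or agents


def schedule(agent, when, condition=None, shuffle=False):
    # Key: (time, agent id, condition); an optional random component
    # reorders agents that are scheduled for the same time.
    jitter = random.random() if shuffle else 0.0
    heapq.heappush(queue, ((when, jitter, agent.unique_id), next(_counter), condition, agent))


def pop_next():
    (when, _, _), _, condition, agent = heapq.heappop(queue)
    return when, condition, agent
```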
J. Fernando Sánchez
cbbaf73538 Fix bug EventedEnvironment 2022-10-20 12:07:56 +02:00
J. Fernando Sánchez
2f5e5d0a74 Black formatting 2022-10-18 17:03:40 +02:00
48 changed files with 954 additions and 1387 deletions

View File

@@ -5,8 +5,46 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
**Note**: Soil 0.30 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/migration_0.30.rst)
# Changes in version 0.3
## SOIL vs MESA
SOIL is a batteries-included platform that builds on top of MESA and provides the following out of the box:
* Integration with (social) networks
* The ability to more easily assign agents to your model (and optionally to its network):
  * Assigning agents to nodes, and vice versa
  * Using a description (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* **Several types of abstractions for agents**:
  * Finite state machine agents, where methods can be turned into states
  * Network agents, which have convenience methods to access the model's topology
  * Generator-based agents, whose state is paused through a `yield` and resumed on the next step
* **Reporting and data collection**:
  * Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
  * All data collected are exported by default to a SQLite database and a description file
  * Options to export to other formats, such as CSV, or to define your own exporters
  * A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**:
  * Agents can be explicit about when their next step should happen, and not all agents run in every step. This avoids unnecessary computation.
  * Time intervals between each step are flexible.
  * There are primitives to specify when (or under which conditions) an agent should next be executed
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
  * Run models in parallel
  * Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to run multiple simulations
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
Nevertheless, most features in SOIL have been designed to integrate with plain Mesa.
For instance, it should be possible to run a plain `mesa.Model` using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler into a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or even wrong.
For instance, you may add any `soil.agent` agent (except for the `soil.NetworkAgent`, as it needs a topology) to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on it may not work as expected.
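For instance, a vanilla `mesa.Model` could be run through a `soil.Simulation` roughly as follows (a sketch based on the `Simulation(model_class=..., model_params=...)` interface used in the examples in this release; `MyMesaModel` is illustrative, and full mesa compatibility is still a work in progress):

```python
from mesa import Model
from mesa.time import RandomActivation
from soil import Simulation


class MyMesaModel(Model):
    """A vanilla mesa model, with no soil-specific code."""

    def __init__(self, n_agents=10, **kwargs):
        super().__init__()
        self.schedule = RandomActivation(self)
        self.n_agents = n_agents

    def step(self):
        self.schedule.step()


simulation = Simulation(
    name="PlainMesaModel",
    model_class=MyMesaModel,
    model_params={"n_agents": 10},
    num_trials=1,
    max_time=100,
)

if __name__ == "__main__":
    simulation.run()
```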
## Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
@@ -18,27 +56,6 @@ If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation

View File

@@ -1,262 +0,0 @@
Configuring a simulation
------------------------
There are two ways to configure a simulation: programmatically and with a configuration file.
In both cases, the parameters used are the same.
The advantage of a configuration file is that it is a clean, declarative description, and it makes the simulation easier to reproduce.
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. literalinclude:: example.yml
:language: yaml
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment parameters (``environment_params``), which include variables such as ``prob_infect``.
The state of the agents will be updated every 2 seconds (``interval``).
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
.. code::
soil_output
└── MyExampleSimulation
├── MyExampleSimulation.dumped.yml
├── MyExampleSimulation.simulation.pickle
├── MyExampleSimulation_trial_0.db.sqlite
├── MyExampleSimulation_trial_1.db.sqlite
└── MyExampleSimulation_trial_2.db.sqlite
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
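The per-trial ``sqlite`` files can be inspected with any standard SQLite tooling. A minimal sketch (the exact table layout is not described here, so this only lists the tables and their sizes; the path follows the example output above):

.. code:: python

    import sqlite3

    db = sqlite3.connect(
        "soil_output/MyExampleSimulation/MyExampleSimulation_trial_0.db.sqlite"
    )
    tables = [
        name
        for (name,) in db.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        )
    ]
    for table in tables:
        count = db.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        print(table, count)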
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
.. code:: yaml
---
topology:
nodes:
- id: First
- id: Second
links:
- source: First
target: Second
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
.. code:: yaml
network_params:
generator: complete_graph
n: 100
Environment
============
The environment is the place where the shared state of the simulation is stored.
This includes global parameters, such as the probability of a disease outbreak, as well as other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
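A minimal sketch of such a custom environment (assuming the ``soil.Environment`` base class; the class, method and parameter names are illustrative):

.. code:: python

    from soil import Environment

    class LotteryEnvironment(Environment):
        lottery_prob = 0.001  # chance of winning on any given attempt

        def play_lottery(self, agent):
            """Decide centrally whether *agent* wins, instead of leaving the decision to the agent."""
            return self.random.random() < self.lottery_prob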
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized by two variables: agent type (``agent_class``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to the agent's ``interval``.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
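As a minimal sketch (reusing the earthquake parameters from the environment example above; the class name is illustrative):

.. code:: python

    from soil.agents import BaseAgent

    class EarthquakeMonitor(BaseAgent):
        def step(self):
            # Environment (model) parameters are accessible from the agent
            if self.prob(self.model["daily_probability_of_earthquake"]):
                self.model["number_of_earthquakes"] += 1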
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated with an agent of that type.
.. code:: yaml
agent_class: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 1
- agent_class: CounterModel
weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
.. code:: yaml
topology:
nodes:
- id: first
agent_class: BaseAgent
states:
first:
agent_class: SISaModel
This would also work with a randomly generated network:
.. code:: yaml
network:
generator: complete
n: 5
agent_class: BaseAgent
states:
- agent_class: SISaModel
In addition to agent type, you may add a custom initial state to the distribution.
This is very useful to add the same agent type with different states.
For example, to populate the network with SISaModel agents, roughly 10% of them in the discontent state:
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 9
state:
id: neutral
- agent_class: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_class: SISaModel
network:
generator: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_class: MyAgent
state:
mood: happy
- agent_class: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.
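A sketch of such an agent (the class and the ``prob_disaster``/``disaster_happened`` parameters are illustrative; it only relies on the ``prob`` helper and environment parameters shown elsewhere in these docs):

.. code:: python

    from soil.agents import BaseAgent

    class NaturalDisaster(BaseAgent):
        def step(self):
            # A chance event that no network agent controls
            if self.prob(self.model["prob_disaster"]):
                self.model["disaster_happened"] = True
                # Make the disease spread faster after the disaster
                self.model["prob_infect"] = self.model["prob_infect"] * 2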
Templating
==========
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters on the simulation.
For instance, you may want to run a simulation with different agent distributions.
This can be done in Soil using **templates**.
A template is a configuration where some of the values are specified with a variable.
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
There are two types of variables, depending on how their values are decided:
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
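Conceptually, fixed variables expand like a cartesian product. The following sketch shows only that expansion logic (illustrative, not soil's actual template machinery; the variable names are made up):

.. code:: python

    from itertools import product

    fixed = {"var1": [1, 2], "agent_count": [10, 100]}

    # One configuration per combination of fixed values
    configurations = [
        dict(zip(fixed, values)) for values in product(*fixed.values())
    ]
    # [{'var1': 1, 'agent_count': 10}, {'var1': 1, 'agent_count': 100}, ...]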
Here is an example with a single fixed variable and two bounded variables:
.. literalinclude:: ../examples/template.yml
:language: yaml

View File

@@ -3,33 +3,38 @@ name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
model_params:
topology:
params:
generator: barabasi_albert_graph
n: 100
m: 2
agents:
distribution:
- agent_class: SISaModel
weight: 1
topology: True
ratio: 0.1
state:
id: content
state_id: content
- agent_class: SISaModel
weight: 1
topology: True
ratio: .1
state:
id: discontent
state_id: discontent
- agent_class: SISaModel
weight: 8
topology: True
ratio: 0.8
state:
id: neutral
environment_params:
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1
state_id: neutral
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1

View File

@@ -1,8 +1,3 @@
.. Soil documentation master file, created by
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Soil's documentation!
================================

View File

@@ -14,6 +14,10 @@ Now test that it worked by running the command line tool
soil --help
#or
python -m soil --help
Or, if you're using soil programmatically:
.. code:: python
@@ -21,4 +25,4 @@ Or, if you're using using soil programmatically:
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.
The latest version can be installed through `GitHub <https://github.com/gsi-upm/soil>`_ or `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_.

View File

@@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil
if "%1" == "" goto help
eE
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.

docs/mesa.rst Normal file
View File

@@ -0,0 +1,22 @@
Mesa compatibility
------------------
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage

View File

@@ -2,29 +2,32 @@
name: quickstart
num_trials: 1
max_time: 1000
network_agents:
- agent_class: SISaModel
state:
id: neutral
weight: 1
- agent_class: SISaModel
state:
id: content
weight: 2
network_params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
environment_params:
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1
model_params:
agents:
- agent_class: SISaModel
topology: true
state:
id: neutral
weight: 1
- agent_class: SISaModel
topology: true
state:
id: content
weight: 2
topology:
params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

View File

@@ -115,13 +115,13 @@ Here's the code:
@soil.agents.state
def neutral(self):
r = random.random()
if self['has_tv'] and r < self.env['prob_tv_spread']:
if self['has_tv'] and r < self.model['prob_tv_spread']:
return self.infected
return
@soil.agents.state
def infected(self):
prob_infect = self.env['prob_neighbor_spread']
prob_infect = self.model['prob_neighbor_spread']
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
r = random.random()
if r < prob_infect:
@@ -146,11 +146,11 @@ spreading the rumor.
class NewsEnvironmentAgent(soil.agents.BaseAgent):
def step(self):
if self.now == self['event_time']:
self.env['prob_tv_spread'] = 1
self.env['prob_neighbor_spread'] = 1
self.model['prob_tv_spread'] = 1
self.model['prob_neighbor_spread'] = 1
elif self.now > self['event_time']:
self.env['prob_tv_spread'] = self.env['prob_tv_spread'] * TV_FACTOR
self.env['prob_neighbor_spread'] = self.env['prob_neighbor_spread'] * NEIGHBOR_FACTOR
self.model['prob_tv_spread'] = self.model['prob_tv_spread'] * TV_FACTOR
self.model['prob_neighbor_spread'] = self.model['prob_neighbor_spread'] * NEIGHBOR_FACTOR
Testing the agents
~~~~~~~~~~~~~~~~~~

View File

@@ -1,4 +1,5 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta
class Fibonacci(FSM):
@@ -11,7 +12,7 @@ class Fibonacci(FSM):
def counting(self):
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev)
return None, Delta(prev)
class Odds(FSM):
@@ -21,18 +22,26 @@ class Odds(FSM):
@state
def odds(self):
self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1 + self.now % 2)
return None, Delta(1 + self.now % 2)
from soil import Simulation
simulation = Simulation(
model_params={
'agents':[
{'agent_class': Fibonacci, 'node_id': 0},
{'agent_class': Odds, 'node_id': 1}
],
'topology': {
'params': {
'generator': 'complete_graph',
'n': 2
}
},
},
max_time=100,
)
if __name__ == "__main__":
from soil import Simulation
s = Simulation(
network_agents=[
{"ids": [0], "agent_class": Fibonacci},
{"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)
simulation.run(dry_run=True)

View File

@@ -18,6 +18,7 @@ An example scenario could play like the following:
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid
@@ -28,17 +29,18 @@ from mesa.space import MultiGrid
@dataclass
class Journey:
"""
This represents a request for a journey. Passengers and drivers exchange this object.
This represents a request for a journey. Passengers and drivers exchange this object.
A journey may have a driver assigned or not. If the driver has not been assigned, this
object is considered a "request for a journey".
"""
origin: (int, int)
destination: (int, int)
tip: float
passenger: Passenger
driver: Driver = None
driver: Optional[Driver] = None
class City(EventedEnvironment):
@@ -54,20 +56,33 @@ class City(EventedEnvironment):
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
def __init__(self, *args, n_cars=1, n_passengers=10,
height=100, width=100, agents=None,
model_reporters=None,
**kwargs):
def __init__(
self,
*args,
n_cars=1,
n_passengers=10,
height=100,
width=100,
agents=None,
model_reporters=None,
**kwargs,
):
self.grid = MultiGrid(width=width, height=height, torus=False)
if agents is None:
agents = []
for i in range(n_cars):
agents.append({'agent_class': Driver})
agents.append({"agent_class": Driver})
for i in range(n_passengers):
agents.append({'agent_class': Passenger})
model_reporters = model_reporters or {'earnings': 'total_earnings', 'n_passengers': 'number_passengers'}
print('REPORTERS', model_reporters)
super().__init__(*args, agents=agents, model_reporters=model_reporters, **kwargs)
agents.append({"agent_class": Passenger})
model_reporters = model_reporters or {
"earnings": "total_earnings",
"n_passengers": "number_passengers",
}
print("REPORTERS", model_reporters)
super().__init__(
*args, agents=agents, model_reporters=model_reporters, **kwargs
)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
@@ -87,13 +102,13 @@ class Driver(Evented, FSM):
earnings = 0
def on_receive(self, msg, sender):
'''This is not a state. It will run (and block) every time check_messages is invoked'''
"""This is not a state. It will run (and block) every time check_messages is invoked"""
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
msg.driver = self
self.journey = msg
def check_passengers(self):
'''If there are no more passengers, stop forever'''
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.info(f"Passengers left {c}")
if not c:
@@ -102,17 +117,20 @@ class Driver(Evented, FSM):
@default_state
@state
def wandering(self):
'''Move around the city until a journey is accepted'''
"""Move around the city until a journey is accepted"""
target = None
self.check_passengers()
self.journey = None
while self.journey is None: # No potential journeys detected (see on_receive)
if target is None or not self.move_towards(target):
target = self.random.choice(self.model.grid.get_neighborhood(self.pos, moore=False))
target = self.random.choice(
self.model.grid.get_neighborhood(self.pos, moore=False)
)
self.check_passengers()
self.check_messages() # This will call on_receive behind the scenes, and the agent's status will be updated
yield Delta(30) # Wait at least 30 seconds before checking again
# This will call on_receive behind the scenes, and the agent's status will be updated
self.check_messages()
yield Delta(30) # Wait at least 30 seconds before checking again
try:
# Re-send the journey to the passenger, to confirm that we have been selected
@@ -126,7 +144,7 @@ class Driver(Evented, FSM):
@state
def driving(self):
'''The journey has been accepted. Pick them up and take them to their destination'''
"""The journey has been accepted. Pick them up and take them to their destination"""
while self.move_towards(self.journey.origin):
yield
while self.move_towards(self.journey.destination, with_passenger=True):
@@ -136,7 +154,7 @@ class Driver(Evented, FSM):
return self.wandering
def move_towards(self, target, with_passenger=False):
'''Move one cell at a time towards a target'''
"""Move one cell at a time towards a target"""
self.info(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
@@ -151,30 +169,36 @@ class Driver(Evented, FSM):
break
self.model.grid.move_agent(self, tuple(next_pos))
if with_passenger:
self.journey.passenger.pos = self.pos # This could be communicated through messages
self.journey.passenger.pos = (
self.pos
) # This could be communicated through messages
return True
class Passenger(Evented, FSM):
pos = None
def on_receive(self, msg, sender):
'''This is not a state. It will be run synchronously every time `check_messages` is run'''
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
if isinstance(msg, Journey):
self.journey = msg
return msg
@default_state
@state
def asking(self):
destination = (self.random.randint(0, self.model.grid.height), self.random.randint(0, self.model.grid.width))
destination = (
self.random.randint(0, self.model.grid.height),
self.random.randint(0, self.model.grid.width),
)
self.journey = None
journey = Journey(origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self)
journey = Journey(
origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self,
)
timeout = 60
expiration = self.now + timeout
@@ -182,24 +206,38 @@ class Passenger(Evented, FSM):
while not self.journey:
self.info(f"Passenger at: { self.pos }. Checking for responses.")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Passenger at: { self.pos }. Asking for journey.")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
self.check_messages()
return self.driving_home
@state
def driving_home(self):
while self.pos[0] != self.journey.destination[0] or self.pos[1] != self.journey.destination[1]:
yield self.received(timeout=60)
while (
self.pos[0] != self.journey.destination[0]
or self.pos[1] != self.journey.destination[1]
):
try:
yield self.received(timeout=60)
except events.TimedOut:
pass
self.info("Got home safe!")
self.die()
simulation = Simulation(name='RideHailing', model_class=City, model_params={'n_passengers': 2})
simulation = Simulation(
name="RideHailing",
model_class=City,
model_params={"n_passengers": 2},
seed="carsSeed",
)
if __name__ == "__main__":
with easy(simulation) as s:
s.run()
simulation.run()

View File

@@ -111,4 +111,5 @@ server = ModularServer(
)
server.port = 8521
server.launch(open_browser=False)
if __name__ == '__main__':
server.launch(open_browser=False)

View File

@@ -28,7 +28,7 @@ class MoneyAgent(MesaAgent):
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1):
def __init__(self, unique_id, model, wealth=1, **kwargs):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth

View File

@@ -10,32 +10,48 @@ def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
G.add_node(2)
return G
class MyAgent(agents.FSM):
times_run = 0
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if agents.prob(0.2):
if self.prob(0.2):
self.times_run += 1
self.info("This runs 2/10 times on average")
s = Simulation(
simulation = Simulation(
name="Programmatic",
network_params={"generator": mygenerator},
model_params={
'topology': {
'params': {
'generator': mygenerator
},
},
'agents': {
'distribution': [{
'agent_class': MyAgent,
'topology': True,
}]
}
},
seed='Program',
agent_reporters={'times_run': 'times_run'},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)
if __name__ == "__main__":
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = simulation.run()
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()
# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')
for agent in envs[0].agents:
print(agent.times_run)

View File

@@ -170,6 +170,6 @@ class Police(FSM):
if __name__ == "__main__":
from soil import simulation
from soil import run_from_config
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)

View File

@@ -5,6 +5,8 @@ import math
class RabbitEnv(Environment):
prob_death = 1e-100
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@@ -129,11 +131,11 @@ class RandomAccident(BaseAgent):
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
prob_death = self.model.prob_death * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
for i in self.get_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):

View File

@@ -31,11 +31,11 @@ class MyAgent(agents.FSM):
s = Simulation(
name="Programmatic",
network_agents=[{"agent_class": MyAgent, "id": 0}],
topology={"nodes": [{"id": 0}], "links": []},
model_params={
'agents': [{'agent_class': MyAgent}],
},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)

View File

@@ -108,14 +108,14 @@ class TerroristSpreadModel(FSM, Geo):
return
return self.leader
def ego_search(self, steps=1, center=False, node=None, **kwargs):
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = as_node(node if node is not None else self)
node = agent.node
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
def degree(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_degree"))
@@ -125,8 +125,8 @@ class TerroristSpreadModel(FSM, Geo):
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
def betweenness(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_betweenness"))
@@ -258,9 +258,7 @@ class TerroristNetworkModel(TerroristSpreadModel):
)
neighbours = set(
agent.id
for agent in self.get_neighbors(
agent_class=TerroristNetworkModel
)
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):

View File

@@ -216,13 +216,13 @@
" @soil.agents.state\n",
" def neutral(self):\n",
" r = random.random()\n",
" if self['has_tv'] and r < self.env['prob_tv_spread']:\n",
" if self['has_tv'] and r < self.model['prob_tv_spread']:\n",
" return self.infected\n",
" return\n",
" \n",
" @soil.agents.state\n",
" def infected(self):\n",
" prob_infect = self.env['prob_neighbor_spread']\n",
" prob_infect = self.model['prob_neighbor_spread']\n",
" for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):\n",
" r = random.random()\n",
" if r < prob_infect:\n",
@@ -271,11 +271,11 @@
"class NewsEnvironmentAgent(soil.agents.NetworkAgent):\n",
" def step(self):\n",
" if self.now == self['event_time']:\n",
" self.env['prob_tv_spread'] = 1\n",
" self.env['prob_neighbor_spread'] = 1\n",
" self.model['prob_tv_spread'] = 1\n",
" self.model['prob_neighbor_spread'] = 1\n",
" elif self.now > self['event_time']:\n",
" self.env['prob_tv_spread'] = self.env['prob_tv_spread'] * TV_FACTOR\n",
" self.env['prob_neighbor_spread'] = self.env['prob_neighbor_spread'] * NEIGHBOR_FACTOR"
" self.model['prob_tv_spread'] = self.model['prob_tv_spread'] * TV_FACTOR\n",
" self.model['prob_neighbor_spread'] = self.model['prob_neighbor_spread'] * NEIGHBOR_FACTOR"
]
},
{

View File

@@ -1 +1 @@
0.30.0rc2
0.30.0rc4

View File

@@ -1,6 +1,7 @@
from __future__ import annotations
import importlib
from importlib.resources import path
import sys
import os
import logging
@@ -14,10 +15,12 @@ try:
except NameError:
basestring = str
from pathlib import Path
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment, EventedEnvironment
from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
@@ -35,8 +38,10 @@ def main(
**kwargs,
):
sim = None
if isinstance(cfg, Simulation):
sim = cfg
import argparse
from . import simulation
@@ -47,7 +52,7 @@ def main(
"file",
type=str,
nargs="?",
default=cfg if sim is None else '',
default=cfg if sim is None else "",
help="Configuration file for the simulation (e.g., YAML or JSON)",
)
parser.add_argument(
@@ -169,22 +174,26 @@ def main(
sim.exporters = exporters
sim.parallel = parallel
sim.outdir = output
sims = [sim, ]
sims = [
sim,
]
else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
sims = list(simulation.iter_from_config(
args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs,
))
sims = list(
simulation.iter_from_config(
args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs,
)
)
for sim in sims:

View File

@@ -22,10 +22,10 @@ class BassModel(FSM):
else:
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if self.prob((self["imitation_prob"] * num_neighbors_aware)):
if self.prob((self.imitation_prob * num_neighbors_aware)):
self.sentimentCorrelation = 1
return self.aware
@state
def aware(self):
self.die()
self.die()

View File

@@ -1,118 +0,0 @@
from . import FSM, state, default_state
class BigMarketModel(FSM):
"""
Settings:
Names:
enterprises [Array]
tweet_probability_enterprises [Array]
Users:
tweet_probability_users
tweet_relevant_probability
tweet_probability_about [Array]
sentiment_about [Array]
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params["enterprises"]
self.type = ""
if self.id < len(self.enterprises): # Enterprises
self._set_state(self.enterprise.id)
self.type = "Enterprise"
self.tweet_probability = environment.environment_params[
"tweet_probability_enterprises"
][self.id]
else: # normal users
self.type = "User"
self._set_state(self.user.id)
self.tweet_probability = environment.environment_params[
"tweet_probability_users"
]
self.tweet_relevant_probability = environment.environment_params[
"tweet_relevant_probability"
]
self.tweet_probability_about = environment.environment_params[
"tweet_probability_about"
] # List
self.sentiment_about = environment.environment_params[
"sentiment_about"
] # List
@state
def enterprise(self):
if self.random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbour users
for x in aware_neighbors:
if self.random.uniform(0, 10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
# Set limits
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id] < -1:
x.sentiment_about[self.id] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[self.id]
] = x.sentiment_about[self.id]
@state
def user(self):
if self.random.random() < self.tweet_probability: # Tweets
if (
self.random.random() < self.tweet_relevant_probability
): # Tweets something relevant
# Tweet probability per enterprise
for i in range(len(self.enterprises)):
random_num = self.random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
# NEGATIVE
self.userTweets("negative", i)
elif self.sentiment_about[i] == 0:
# NEUTRAL
pass
else:
# POSITIVE
self.userTweets("positive", i)
for i in range(
len(self.enterprises)
): # So that it never is set to 0 if there are not changes (logs)
self.attrs[
"sentiment_enterprise_%s" % self.enterprises[i]
] = self.sentiment_about[i]
def userTweets(self, sentiment, enterprise):
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] += 0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -= 0.003
else:
pass
# Set limits
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[enterprise]
] = x.sentiment_about[enterprise]

View File

@@ -1,14 +1,14 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent, as_node
from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, node=None, center=False, **kwargs):
def geo_search(self, radius, agent=None, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = as_node(node if node is not None else self)
node = agent.node
G = self.subgraph(**kwargs)
@@ -18,4 +18,4 @@ class Geo(NetworkAgent):
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
return [nodes[i] for i in indices if center or (nodes[i] != node)]

View File

@@ -1,7 +1,7 @@
from . import BaseAgent
from . import Agent, state, default_state
class IndependentCascadeModel(BaseAgent):
class IndependentCascadeModel(Agent):
"""
Settings:
innovation_prob
@@ -9,42 +9,22 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.innovation_prob = self.env.environment_params["innovation_prob"]
self.imitation_prob = self.env.environment_params["imitation_prob"]
self.state["time_awareness"] = 0
self.state["sentimentCorrelation"] = 0
time_awareness = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
# Outside effects
@default_state
@state
def outside(self):
if self.prob(self.model.innovation_prob):
self.sentimentCorrelation = 1
self.time_awareness = self.model.now # To know when they have been infected
return self.imitate
def behaviour(self):
aware_neighbors_1_time_step = []
# Outside effects
if self.prob(self.innovation_prob):
if self.state["id"] == 0:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state[
"time_awareness"
] = self.env.now # To know when they have been infected
else:
pass
@state
def imitate(self):
aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
return
# Imitation effects
if self.state["id"] == 0:
aware_neighbors = self.get_neighbors(state_id=1)
for x in aware_neighbors:
if x.state["time_awareness"] == (self.env.now - 1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if self.prob(self.imitation_prob * num_neighbors_aware):
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
else:
pass
return
if self.prob(self.model.imitation_prob * len(aware_neighbors)):
self.sentimentCorrelation = 1
return self.outside

View File

@@ -1,270 +0,0 @@
import numpy as np
from . import BaseAgent
class SpreadModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
# Use a single generator with the same seed as `self.random`
random = np.random.default_rng(seed=self._seed)
self.prob_neutral_making_denier = random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.prob(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
class ControlModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = np.random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = np.random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = np.random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = np.random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
elif self.state["id"] == 4: # Beacon-off
self.beacon_off_behaviour()
elif self.state["id"] == 5: # Beacon-on
self.beacon_on_behaviour()
def neutral_behaviour(self):
self.state["visible"] = False
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.random(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
self.state["visible"] = False
def cured_behaviour(self):
self.state["visible"] = True
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
self.state["visible"] = True
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
def beacon_off_behaviour(self):
self.state["visible"] = False
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
self.state["id"] == 5 # Beacon on
def beacon_on_behaviour(self):
self.state["visible"] = False
# Cure (M2 feature added)
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
for neighbor in neutral_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
for neighbor in infected_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated

View File

@@ -1,8 +1,9 @@
import numpy as np
from . import FSM, state
from hashlib import sha512
from . import Agent, state, default_state
class SISaModel(FSM):
class SISaModel(Agent):
"""
Settings:
neutral_discontent_spon_prob
@@ -28,38 +29,45 @@ class SISaModel(FSM):
standard_variance
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
random = np.random.default_rng(seed=self._seed)
seed = self.model._seed
if isinstance(seed, (str, bytes, bytearray)):
if isinstance(seed, str):
seed = seed.encode()
seed = int.from_bytes(seed + sha512(seed).digest(), 'big')
random = np.random.default_rng(seed=seed)
self.neutral_discontent_spon_prob = random.normal(
self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
self.model.neutral_discontent_spon_prob, self.model.standard_variance
)
self.neutral_discontent_infected_prob = random.normal(
self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
self.model.neutral_discontent_infected_prob, self.model.standard_variance
)
self.neutral_content_spon_prob = random.normal(
self.env["neutral_content_spon_prob"], self.env["standard_variance"]
self.model.neutral_content_spon_prob, self.model.standard_variance
)
self.neutral_content_infected_prob = random.normal(
self.env["neutral_content_infected_prob"], self.env["standard_variance"]
self.model.neutral_content_infected_prob, self.model.standard_variance
)
self.discontent_neutral = random.normal(
self.env["discontent_neutral"], self.env["standard_variance"]
self.model.discontent_neutral, self.model.standard_variance
)
self.discontent_content = random.normal(
self.env["discontent_content"], self.env["variance_d_c"]
self.model.discontent_content, self.model.variance_d_c
)
self.content_discontent = random.normal(
self.env["content_discontent"], self.env["variance_c_d"]
self.model.content_discontent, self.model.variance_c_d
)
self.content_neutral = random.normal(
self.env["content_neutral"], self.env["standard_variance"]
self.model.discontent_neutral, self.model.standard_variance
)
@default_state
@state
def neutral(self):
# Spontaneous effects
@@ -70,10 +78,10 @@ class SISaModel(FSM):
# Infected
discontent_neighbors = self.count_neighbors(state_id=self.discontent)
if self.prob(scontent_neighbors * self.neutral_discontent_infected_prob):
if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
return self.discontent
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.neutral_content_infected_prob):
if self.prob(content_neighbors * self.neutral_content_infected_prob):
return self.content
return self.neutral
@@ -85,7 +93,7 @@ class SISaModel(FSM):
# Superinfected
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.discontent_content):
if self.prob(content_neighbors * self.discontent_content):
return self.content
return self.discontent
@@ -97,6 +105,6 @@ class SISaModel(FSM):
# Superinfected
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
if self.prob(scontent_neighbors * self.content_discontent):
if self.prob(discontent_neighbors * self.content_discontent):
self.discontent
return self.content

View File

@@ -1,115 +0,0 @@
from . import BaseAgent
class SentimentCorrelationModel(BaseAgent):
"""
Settings:
outside_effects_prob
anger_prob
joy_prob
sadness_prob
disgust_prob
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.outside_effects_prob = environment.environment_params[
"outside_effects_prob"
]
self.anger_prob = environment.environment_params["anger_prob"]
self.joy_prob = environment.environment_params["joy_prob"]
self.sadness_prob = environment.environment_params["sadness_prob"]
self.disgust_prob = environment.environment_params["disgust_prob"]
self.state["time_awareness"] = []
for i in range(4): # In this model we have 4 sentiments
self.state["time_awareness"].append(
0
) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
self.state["sentimentCorrelation"] = 0
def step(self):
self.behaviour()
def behaviour(self):
angry_neighbors_1_time_step = []
joyful_neighbors_1_time_step = []
sad_neighbors_1_time_step = []
disgusted_neighbors_1_time_step = []
angry_neighbors = self.get_neighbors(state_id=1)
for x in angry_neighbors:
if x.state["time_awareness"][0] > (self.env.now - 500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighbors(state_id=2)
for x in joyful_neighbors:
if x.state["time_awareness"][1] > (self.env.now - 500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighbors(state_id=3)
for x in sad_neighbors:
if x.state["time_awareness"][2] > (self.env.now - 500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighbors(state_id=4)
for x in disgusted_neighbors:
if x.state["time_awareness"][3] > (self.env.now - 500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
anger_prob = self.anger_prob + (
len(angry_neighbors_1_time_step) * self.anger_prob
)
joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
sadness_prob = self.sadness_prob + (
len(sad_neighbors_1_time_step) * self.sadness_prob
)
disgust_prob = self.disgust_prob + (
len(disgusted_neighbors_1_time_step) * self.disgust_prob
)
outside_effects_prob = self.outside_effects_prob
num = self.random.random()
if num < outside_effects_prob:
self.state["id"] = self.random.randint(1, 4)
self.state["sentimentCorrelation"] = self.state[
"id"
] # It is stored when it has been infected for the dynamic network
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]
if num < anger_prob:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < joy_prob + anger_prob and num > anger_prob:
self.state["id"] = 2
self.state["sentimentCorrelation"] = 2
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
self.state["id"] = 3
self.state["sentimentCorrelation"] = 3
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif (
num < disgust_prob + sadness_prob + anger_prob + joy_prob
and num > sadness_prob + anger_prob + joy_prob
):
self.state["id"] = 4
self.state["sentimentCorrelation"] = 4
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]

View File

@@ -20,12 +20,6 @@ from typing import Dict, List
from .. import serialization, utils, time, config
def as_node(agent):
if isinstance(agent, BaseAgent):
return agent.id
return agent
IGNORED_FIELDS = ("model", "logger")
@@ -97,10 +91,6 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
"""
def __init__(self, unique_id, model, name=None, interval=None, **kwargs):
# Check for REQUIRED arguments
# Initialize agent parameters
if isinstance(unique_id, MesaAgent):
raise Exception()
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
@@ -207,7 +197,8 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
def step(self):
if not self.alive:
raise time.DeadAgent(self.unique_id)
return super().step() or time.Delta(self.interval)
super().step()
return time.Delta(self.interval)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
@@ -414,7 +405,7 @@ def filter_agents(
if ids:
f = (agents[aid] for aid in ids if aid in agents)
else:
f = (a for a in agents.values())
f = agents.values()
if state_id is not None and not isinstance(state_id, (tuple, list)):
state_id = tuple([state_id])
@@ -564,9 +555,9 @@ def _from_fixed(
def _from_distro(
distro: List[config.AgentDistro],
n: int,
topology: str,
default: config.SingleAgentConfig,
random,
topology: str = None
) -> List[Dict[str, Any]]:
agents = []
@@ -630,14 +621,18 @@ def _from_distro(
from .network_agents import *
from .fsm import *
from .evented import *
class Agent(NetworkAgent, FSM, EventedAgent):
"""Default agent class, has both network and event capabilities"""
from .BassModel import *
from .BigMarketModel import *
from .IndependentCascadeModel import *
from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
try:
import scipy
from .Geo import Geo
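
For reference, a minimal sketch of how the new default `Agent` class (combining `NetworkAgent`, `FSM` and `EventedAgent`) might be used; the `Citizen` class, its state names and the broadcast payload are hypothetical, and it assumes the default evented `Environment` from this changeset.

```python
from soil import agents

class Citizen(agents.Agent):  # network + FSM + event capabilities in one class
    @agents.default_state
    @agents.state("idle")
    def idle(self):
        # Network helpers and the message inbox are both available here
        if self.check_messages() or self.count_neighbors(state_id="active"):
            return self.active

    @agents.state("active")
    def active(self):
        # Requires an evented environment, e.g. the default soil.environment.Environment
        self.model.broadcast("hello", sender=self)
        return self.idle
```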

View File

@@ -1,57 +1,77 @@
from . import BaseAgent
from ..events import Message, Tell, Ask, Reply, TimedOut
from ..time import Cond
from ..events import Message, Tell, Ask, TimedOut
from ..time import BaseCond
from functools import partial
from collections import deque
class Evented(BaseAgent):
class ReceivedOrTimeout(BaseCond):
def __init__(
self, agent, expiration=None, timeout=None, check=True, ignore=False, **kwargs
):
if expiration is None:
if timeout is not None:
expiration = agent.now + timeout
self.expiration = expiration
self.ignore = ignore
self.check = check
super().__init__(**kwargs)
def expired(self, time):
return self.expiration and self.expiration < time
def ready(self, agent, time):
return len(agent._inbox) or self.expired(time)
def return_value(self, agent):
if not self.ignore and self.expired(agent.now):
raise TimedOut("No messages received")
if self.check:
agent.check_messages()
return None
def schedule_next(self, time, delta, first=False):
if self._delta is not None:
delta = self._delta
return (time + delta, self)
def __repr__(self):
return f"ReceivedOrTimeout(expires={self.expiration})"
class EventedAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._inbox = deque()
self._received = 0
self._processed = 0
def on_receive(self, *args, **kwargs):
pass
def received(self, expiration=None, timeout=None):
current = self._received
if expiration is None:
expiration = float('inf') if timeout is None else self.now + timeout
def received(self, *args, **kwargs):
return ReceivedOrTimeout(self, *args, **kwargs)
if expiration < self.now:
raise ValueError("Invalid expiration time")
def tell(self, msg, sender=None):
self._inbox.append(Tell(timestamp=self.now, payload=msg, sender=sender))
def ready(agent):
return agent._received > current or agent.now >= expiration
def value(agent):
if agent.now > expiration:
raise TimedOut("No message received")
c = Cond(func=ready, return_func=value)
c._checked = True
return c
def tell(self, msg, sender):
self._received += 1
self._inbox.append(Tell(payload=msg, sender=sender))
def ask(self, msg, timeout=None):
self._received += 1
ask = Ask(payload=msg)
def ask(self, msg, timeout=None, **kwargs):
ask = Ask(timestamp=self.now, payload=msg, sender=self)
self._inbox.append(ask)
expiration = float('inf') if timeout is None else self.now + timeout
return ask.replied(expiration=expiration)
expiration = float("inf") if timeout is None else self.now + timeout
return ask.replied(expiration=expiration, **kwargs)
def check_messages(self):
changed = False
while self._inbox:
msg = self._inbox.popleft()
self._processed += 1
if msg.expired(self.now):
continue
changed = True
reply = self.on_receive(msg.payload, sender=msg.sender)
if isinstance(msg, Ask):
msg.reply = reply
return changed
Evented = EventedAgent
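
As a usage note, a hedged sketch of the reworked messaging API shown above (`ask`, `received`, `check_messages`, `on_receive`); the `Oracle`/`Seeker` agents and the question text are made up, and the scheduling behaviour assumed is that of this changeset.

```python
from soil import agents, environment

class Oracle(agents.EventedAgent):
    def step(self):
        while True:
            # received() also processes the inbox (via on_receive) by default
            yield self.received()

    def on_receive(self, msg, sender=None):
        # The value returned here becomes the reply to an Ask
        if msg == "meaning?":
            return 42

class Seeker(agents.EventedAgent):
    def step(self):
        oracle = self.model.agents[0]
        # ask() drops an Ask in the target's inbox and returns a condition;
        # yielding it suspends this agent until a reply arrives or TimedOut is raised
        answer = yield oracle.ask("meaning?", timeout=3)
        self.log(f"Got answer: {answer}")

env = environment.EventedEnvironment()
env.add_agent(agent_class=Oracle)
env.add_agent(agent_class=Seeker)
for _ in range(3):
    env.step()
```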

View File

@@ -38,8 +38,6 @@ def state(name=None):
self._last_return = None
self._last_except = None
func.id = name or func.__name__
func.is_default = False
return func

View File

@@ -14,8 +14,11 @@ class NetworkAgent(BaseAgent):
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
def iter_neighbors(self, **kwargs):
return self.iter_agents(limit_neighbors=True, **kwargs)
def get_neighbors(self, **kwargs):
return list(self.iter_agents(limit_neighbors=True, **kwargs))
return list(self.iter_neighbors())
@property
def node(self):
@@ -54,7 +57,7 @@ class NetworkAgent(BaseAgent):
return G
def remove_node(self):
print(f"Removing node for {self.unique_id}: {self.node_id}")
self.debug(f"Removing node for {self.unique_id}: {self.node_id}")
self.G.remove_node(self.node_id)
self.node_id = None
@@ -80,3 +83,6 @@ class NetworkAgent(BaseAgent):
if remove:
self.remove_node()
return super().die()
NetAgent = NetworkAgent
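
A small, hedged illustration of the neighbor helpers touched above (`iter_neighbors`, `count_neighbors`); the `Infected` agent and the `"sick"` state id are hypothetical.

```python
from soil import agents

class Infected(agents.NetworkAgent):
    def step(self):
        # Neighbor helpers only consider agents attached to nodes adjacent
        # to this agent's node in the model graph (self.G)
        sick = self.count_neighbors(state_id="sick")
        for neighbor in self.iter_neighbors(state_id="sick"):
            self.debug(f"Neighbor {neighbor.unique_id} is sick")
        if sick and sick == self.count_neighbors():
            self.die()  # die() can also drop the node, as the hunk above shows
```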

View File

@@ -37,13 +37,8 @@ class Topology(BaseModel):
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
params: Optional[NetParams]
params: Optional[Dict[str, Any]]
fixed: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
@@ -135,9 +130,11 @@ class Config(BaseModel, extra=Extra.allow):
num_trials: int = 1
max_time: float = 100
max_steps: int = -1
num_processes: int = 1
interval: float = 1
seed: str = ""
dry_run: bool = False
skip_test: bool = False
model_class: Union[Type, str] = environment.Environment
model_params: Optional[Dict[str, Any]] = {}

View File

@@ -1,6 +1,17 @@
from mesa import DataCollector as MDC
class SoilDataCollector(MDC):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
class SoilCollector(MDC):
def __init__(self, model_reporters=None, agent_reporters=None, tables=None, **kwargs):
model_reporters = model_reporters or {}
agent_reporters = agent_reporters or {}
tables = tables or {}
if 'agent_count' not in model_reporters:
model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
if 'state_id' not in agent_reporters:
agent_reporters['agent_id'] = lambda agent: agent.get('state_id', None)
super().__init__(model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
**kwargs)
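
To illustrate, a hedged sketch of the defaults added by `SoilCollector`: when no reporters are given it records `agent_count` (plus an `agent_id` reporter based on `state_id`), and user-provided reporters are merged in. The `"time"` reporter below is invented.

```python
from soil import agents, environment

env = environment.Environment(
    model_reporters={"time": lambda m: m.now},  # merged with the default 'agent_count'
)
env.add_agent(agent_class=agents.BaseAgent)
env.step()
# Standard mesa DataCollector accessors work on the collected data
print(env.datacollector.get_model_vars_dataframe())
```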

View File

@@ -6,7 +6,7 @@ import math
import logging
import inspect
from typing import Any, Dict, Optional, Union
from typing import Any, Dict, Optional, Union, List
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
@@ -16,9 +16,8 @@ from networkx.readwrite import json_graph
import networkx as nx
from mesa import Model
from mesa.datacollection import DataCollector
from . import agents as agentmod, config, serialization, utils, time, network, events
from . import agents as agentmod, config, datacollection, serialization, utils, time, network, events
class BaseEnvironment(Model):
@@ -38,11 +37,12 @@ class BaseEnvironment(Model):
self,
id="unnamed_env",
seed="default",
schedule=None,
schedule_class=time.TimedActivation,
dir_path=None,
interval=1,
agent_class=None,
agents: [tuple[type, Dict[str, Any]]] = {},
agents: List[tuple[type, Dict[str, Any]]] = {},
collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
@@ -50,7 +50,6 @@ class BaseEnvironment(Model):
):
super().__init__(seed=seed)
self.env_params = env_params or {}
self.current_id = -1
@@ -58,9 +57,11 @@ class BaseEnvironment(Model):
self.dir_path = dir_path or os.getcwd()
if schedule is None:
schedule = time.TimedActivation(self)
self.schedule = schedule
if schedule_class is None:
schedule_class = time.TimedActivation
else:
schedule_class = serialization.deserialize(schedule_class)
self.schedule = schedule_class(self)
self.agent_class = agent_class or agentmod.BaseAgent
@@ -69,11 +70,14 @@ class BaseEnvironment(Model):
self.logger = utils.logger.getChild(self.id)
self.datacollector = DataCollector(
collector_class = serialization.deserialize(collector_class)
self.datacollector = collector_class(
model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
)
for (k, v) in env_params.items():
self[k] = v
def _agent_from_dict(self, agent):
"""
@@ -87,7 +91,7 @@ class BaseEnvironment(Model):
return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
def init_agents(self, agents: Union[config.AgentConfig, List[Dict[str, Any]]] = {}):
"""
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
dictionaries that each describes an agent.
@@ -168,31 +172,41 @@ class BaseEnvironment(Model):
Advance one step in the simulation, and update the data collection and scheduler appropriately
"""
super().step()
self.logger.info(
f"--- Step: {self.schedule.steps:^5} - Time: {self.now:^5} ---"
)
# self.logger.info(
# "--- Step: {:^5} - Time: {now:^5} ---", steps=self.schedule.steps, now=self.now
# )
self.schedule.step()
self.datacollector.collect(self)
def __contains__(self, key):
return key in self.env_params
def get(self, key, default=None):
"""
Get the value of an environment attribute.
Return `default` if the value is not set.
"""
return self.env_params.get(key, default)
def __getitem__(self, key):
return self.env_params.get(key)
try:
return getattr(self, key)
except AttributeError:
raise KeyError(f"key {key} not found in environment")
def __delitem__(self, key):
return delattr(self, key)
def __contains__(self, key):
return hasattr(self, key)
def __setitem__(self, key, value):
return self.env_params.__setitem__(key, value)
setattr(self, key, value)
def __str__(self):
return str(self.env_params)
return str(dict(self))
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return iter(self.agents())
def get(self, key, default=None):
return self[key] if key in self else default
def keys(self):
return (k for k in self.__dict__ if k[0] != "_")
class NetworkEnvironment(BaseEnvironment):
"""
@@ -206,7 +220,12 @@ class NetworkEnvironment(BaseEnvironment):
agents = kwargs.pop("agents", None)
super().__init__(*args, agents=None, **kwargs)
self._set_topology(topology)
if topology is None:
topology = nx.Graph()
elif not isinstance(topology, nx.Graph):
topology = network.from_config(topology, dir_path=self.dir_path)
self.G = topology
self.init_agents(agents)
@@ -214,14 +233,14 @@ class NetworkEnvironment(BaseEnvironment):
"""Initialize the agents from a"""
super().init_agents(*args, **kwargs)
for agent in self.schedule._agents.values():
if hasattr(agent, "node_id"):
self._init_node(agent)
self._init_node(agent)
def _init_node(self, agent):
"""
Make sure the node for a given agent has the proper attributes.
"""
self.G.nodes[agent.node_id]["agent"] = agent
if hasattr(agent, "node_id"):
self.G.nodes[agent.node_id]["agent"] = agent
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg, topology=self.G, random=self.random)
@@ -242,6 +261,7 @@ class NetworkEnvironment(BaseEnvironment):
agent["unique_id"] = unique_id
agent["topology"] = self.G
node_attrs = self.G.nodes[node_id]
node_attrs.pop('agent', None)
node_attrs.update(agent)
agent = node_attrs
@@ -250,17 +270,9 @@ class NetworkEnvironment(BaseEnvironment):
return a
def _set_topology(self, cfg=None, dir_path=None):
if cfg is None:
cfg = nx.Graph()
elif not isinstance(cfg, nx.Graph):
cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.G = cfg
@property
def network_agents(self):
for a in self.schedule._agents:
for a in self.schedule._agents.values():
if isinstance(a, agentmod.NetworkAgent):
yield a
@@ -292,7 +304,7 @@ class NetworkEnvironment(BaseEnvironment):
def add_agent(self, *args, **kwargs):
a = super().add_agent(*args, **kwargs)
if "node_id" in a:
if hasattr(a, "node_id"):
assert self.G.nodes[a.node_id]["agent"] == a
return a
@@ -307,18 +319,31 @@ class NetworkEnvironment(BaseEnvironment):
if "agent" in node:
continue
a_class = self.random.choices(agent_class, weights)[0]
self.add_agent(node_id=node_id, agent_class=a_class, **agent_params)
self.add_agent(node_id=node_id, topology=self.G, agent_class=a_class, **agent_params)
Environment = NetworkEnvironment
class EventedEnvironment(Environment):
def broadcast(self, msg, sender, expiration=None, ttl=None, **kwargs):
class EventedEnvironment(BaseEnvironment):
def broadcast(self, msg, sender=None, expiration=None, ttl=None, **kwargs):
for agent in self.agents(**kwargs):
self.logger.info(f'Telling {repr(agent)}: {msg} ttl={ttl}')
if agent == sender:
continue
self.logger.info(f"Telling {repr(agent)}: {msg} ttl={ttl}")
try:
agent._inbox.append(events.Tell(payload=msg, sender=sender, expiration=expiration if ttl is None else self.now+ttl))
inbox = agent._inbox
except AttributeError:
self.info(f'Agent {agent.unique_id} cannot receive events')
self.logger.info(
f"Agent {agent.unique_id} cannot receive events because it does not have an inbox"
)
continue
# Allow for AttributeError exceptions in this part of the code
inbox.append(
events.Tell(
payload=msg,
sender=sender,
expiration=expiration if ttl is None else self.now + ttl,
)
)
class Environment(NetworkEnvironment, EventedEnvironment):
"""Default environment class, has both network and event capabilities"""

View File

@@ -1,38 +1,51 @@
from .time import Cond
from .time import BaseCond
from dataclasses import dataclass, field
from typing import Any
from uuid import uuid4
class Event:
pass
@dataclass
class Message:
payload: Any
sender: Any = None
expiration: float = None
timestamp: float = None
id: int = field(default_factory=uuid4)
def expired(self, when):
return self.expiration is not None and self.expiration < when
class Reply(Message):
source: Message
class ReplyCond(BaseCond):
def __init__(self, ask, *args, **kwargs):
self._ask = ask
super().__init__(*args, **kwargs)
def ready(self, agent, time):
return self._ask.reply is not None or self._ask.expired(time)
def return_value(self, agent):
if self._ask.expired(agent.now):
raise TimedOut()
return self._ask.reply
def __repr__(self):
return f"ReplyCond({self._ask.id})"
class Ask(Message):
reply: Message = None
def replied(self, expiration=None):
def ready(agent):
return self.reply is not None or agent.now > expiration
def value(agent):
if agent.now > expiration:
raise TimedOut(f'No answer received for {self}')
return self.reply
return Cond(func=ready, return_func=value)
return ReplyCond(self)
class Tell(Message):

View File

@@ -104,17 +104,15 @@ def get_dc_dfs(dc, trial_id=None):
yield from dfs.items()
class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
class SQLite(Exporter):
"""Writes sqlite results"""
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
return
logger.info("Dumping results to %s", self.outdir)
with self.output(self.simulation.name + ".dumped.yml") as f:
f.write(self.simulation.to_yaml())
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
logger.info("Dumping results to %s", self.dbpath)
try_backup(self.dbpath, remove=True)
def trial_end(self, env):
@@ -131,7 +129,6 @@ class default(Exporter):
for (t, df) in self.get_dfs(env):
df.to_sql(t, con=engine, if_exists="append")
class csv(Exporter):
"""Export the state of each environment (and its agents) in a separate CSV file"""
@@ -199,15 +196,61 @@ class summary(Exporter):
"""Print a summary of each trial to sys.stdout"""
def trial_end(self, env):
msg = ""
for (t, df) in self.get_dfs(env):
if not len(df):
continue
msg = indent(str(df.describe()), " ")
logger.info(
dedent(
f"""
tabs = "\t" * 2
description = indent(str(df.describe()), tabs)
last_line = indent(str(df.iloc[-1:]), tabs)
# value_counts = indent(str(df.value_counts()), tabs)
value_counts = indent(str(df.apply(lambda x: x.value_counts()).T.stack()), tabs)
msg += dedent("""
Dataframe {t}:
"""
)
+ msg
)
Last line:
{last_line}
Description:
{description}
Value counts:
{value_counts}
""").format(**locals())
logger.info(msg)
class YAML(Exporter):
"""Writes the configuration of the simulation to a YAML file"""
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
return
with self.output(self.simulation.name + ".dumped.yml") as f:
logger.info(f"Dumping simulation configuration to {self.outdir}")
f.write(self.simulation.to_yaml())
class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
def __init__(self, *args, exporter_cls=[], **kwargs):
exporter_cls = exporter_cls or [YAML, SQLite, summary]
self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
def sim_start(self):
for exporter in self.inner:
exporter.sim_start()
def sim_end(self):
for exporter in self.inner:
exporter.sim_end()
def trial_start(self, env):
for exporter in self.inner:
exporter.trial_start(env)
def trial_end(self, env):
for exporter in self.inner:
exporter.trial_end(env)
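
As a usage sketch (under the assumptions of this changeset), exporters can be picked per simulation; the `default` exporter shown above simply wraps `YAML`, `SQLite` and `summary`. The model parameters below are only a placeholder.

```python
from soil import agents, simulation, exporters

s = simulation.Simulation(
    model_params={"agents": [{"agent_class": agents.BaseAgent}]},
    num_trials=1,
    max_time=10,
    exporters=[exporters.SQLite, exporters.summary],  # skip the YAML dump
    outdir="soil_output",
)
s.run_simulation()
```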

View File

@@ -30,7 +30,7 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
return method(path, **kwargs)
if cfg.params:
net_args = cfg.params.dict()
net_args = dict(cfg.params)
net_gen = net_args.pop("generator")
if dir_path not in sys.path:
@@ -59,7 +59,6 @@ def find_unassigned(G, shuffle=False, random=random):
If node_id is None, a node without an agent_id will be found.
"""
# TODO: test
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)

View File

@@ -146,7 +146,10 @@ def serialize(v, known_modules=KNOWN_MODULES):
def serialize_dict(d, known_modules=KNOWN_MODULES):
d = dict(d)
try:
d = dict(d)
except (ValueError, TypeError) as ex:
return serialize(d)[0]
for (k, v) in d.items():
if isinstance(v, dict):
d[k] = serialize_dict(v, known_modules=known_modules)
@@ -221,8 +224,6 @@ def deserialize(type_, value=None, globs=None, **kwargs):
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
"""Return the list of deserialized objects"""
# TODO: remove
print("SERIALIZATION", kwargs)
objects = []
for name in names:
mod = deserialize(name, known_modules=known_modules)

View File

@@ -48,12 +48,17 @@ class Simulation:
max_steps: int = -1
interval: int = 1
num_trials: int = 1
parallel: Optional[bool] = None
exporters: Optional[List[str]] = field(default_factory=list)
num_processes: Optional[int] = 1
parallel: Optional[bool] = False
exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
tables: Optional[Dict[str, Any]] = field(default_factory=dict)
outdir: Optional[str] = None
exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
dry_run: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
skip_test: Optional[bool] = False
@classmethod
def from_dict(cls, env, **kwargs):
@@ -66,7 +71,7 @@ class Simulation:
if ignored:
d.setdefault("extra", {}).update(ignored)
if ignored:
print(f'Warning: Ignoring these parameters (added to "extra"): { ignored }')
logger.warning(f'Ignoring these parameters (added to "extra"): { ignored }')
d.update(kwargs)
return cls(**d)
@@ -89,7 +94,7 @@ class Simulation:
def run_gen(
self,
parallel=False,
num_processes=1,
dry_run=None,
exporters=None,
outdir=None,
@@ -128,7 +133,7 @@ class Simulation:
for env in utils.run_parallel(
func=self.run_trial,
iterable=range(int(self.num_trials)),
parallel=parallel,
num_processes=num_processes,
log_level=log_level,
**kwargs,
):
@@ -158,8 +163,12 @@ class Simulation:
params.update(model_params)
params.update(kwargs)
agent_reporters = deserialize_reporters(params.pop("agent_reporters", {}))
model_reporters = deserialize_reporters(params.pop("model_reporters", {}))
agent_reporters = self.agent_reporters.copy()
agent_reporters.update(deserialize_reporters(params.pop("agent_reporters", {})))
model_reporters = self.model_reporters.copy()
model_reporters.update(deserialize_reporters(params.pop("model_reporters", {})))
tables = self.tables.copy()
tables.update(deserialize_reporters(params.pop("tables", {})))
env = serialization.deserialize(self.model_class)
return env(
@@ -168,6 +177,7 @@ class Simulation:
dir_path=self.dir_path,
agent_reporters=agent_reporters,
model_reporters=model_reporters,
tables=tables,
**params,
)
@@ -234,12 +244,7 @@ Model stats:
def to_dict(self):
d = asdict(self)
if not isinstance(d["model_class"], str):
d["model_class"] = serialization.name(d["model_class"])
d["model_params"] = serialization.serialize_dict(d["model_params"])
d["dir_path"] = str(d["dir_path"])
d["version"] = "2"
return d
return serialization.serialize_dict(d)
def to_yaml(self):
return yaml.dump(self.to_dict())
@@ -261,6 +266,24 @@ def from_config(conf_or_path):
raise AttributeError("Provide only one configuration")
return lst[0]
def iter_from_py(pyfile, module_name='custom_simulation'):
"""Try to load every Simulation instance in a given Python file"""
import importlib
import inspect
spec = importlib.util.spec_from_file_location(module_name, pyfile)
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
# import pdb;pdb.set_trace()
for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
yield sim
del sys.modules[module_name]
def from_py(pyfile):
return next(iter_from_py(pyfile))
def run_from_config(*configs, **kwargs):
for sim in iter_from_config(*configs):
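
The new `from_py`/`iter_from_py` helpers load `Simulation` instances defined at the top level of a Python file. A hedged sketch, with a hypothetical `my_sim.py` path:

```python
from soil import simulation

# my_sim.py is a hypothetical module defining one or more Simulation objects
sim = simulation.from_py("examples/my_sim.py")
sim.run_simulation(dry_run=True)

# Or run every Simulation found in the file
for sim in simulation.iter_from_py("examples/my_sim.py"):
    sim.run_simulation(dry_run=True)
```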

View File

@@ -1,10 +1,11 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapify
from heapq import heappush, heappop
import math
from inspect import getsource
from numbers import Number
from textwrap import dedent
from .utils import logger
from mesa import Agent as MesaAgent
@@ -23,65 +24,11 @@ class When:
return time
self._time = time
def next(self, time):
def abs(self, time):
return self._time
def abs(self, time):
return self
def __repr__(self):
return str(f"When({self._time})")
def __lt__(self, other):
if isinstance(other, Number):
return self._time < other
return self._time < other.next(self._time)
def __gt__(self, other):
if isinstance(other, Number):
return self._time > other
return self._time > other.next(self._time)
def ready(self, agent):
return self._time <= agent.model.schedule.time
def return_value(self, agent):
return None
class Cond(When):
def __init__(self, func, delta=1, return_func=lambda agent: None):
self._func = func
self._delta = delta
self._checked = False
self._return_func = return_func
def next(self, time):
if self._checked:
return time + self._delta
return time
def abs(self, time):
return self
def ready(self, agent):
self._checked = True
return self._func(agent)
def return_value(self, agent):
return self._return_func(agent)
def __eq__(self, other):
return False
def __lt__(self, other):
return True
def __gt__(self, other):
return False
def __repr__(self):
return str(f'Cond("{getsource(self._func)}")')
def schedule_next(self, time, delta, first=False):
return (self._time, None)
NEVER = When(INFINITY)
@@ -91,48 +38,94 @@ class Delta(When):
def __init__(self, delta):
self._delta = delta
def abs(self, time):
return self._time + self._delta
def __eq__(self, other):
if isinstance(other, Delta):
return self._delta == other._delta
return False
def abs(self, time):
return When(self._delta + time)
def next(self, time):
return time + self._delta
def schedule_next(self, time, delta, first=False):
return (time + self._delta, None)
def __repr__(self):
return str(f"Delta({self._delta})")
class BaseCond:
def __init__(self, msg=None, delta=None, eager=False):
self._msg = msg
self._delta = delta
self.eager = eager
def schedule_next(self, time, delta, first=False):
if first and self.eager:
return (time, self)
if self._delta:
delta = self._delta
return (time + delta, self)
def return_value(self, agent):
return None
def __repr__(self):
return self._msg or self.__class__.__name__
class Cond(BaseCond):
def __init__(self, func, *args, **kwargs):
self._func = func
super().__init__(*args, **kwargs)
def ready(self, agent, time):
return self._func(agent)
def __repr__(self):
if self._msg:
return self._msg
return str(f'Cond("{dedent(getsource(self._func)).strip()}")')
class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
In each activation, each agent will update its 'next_time'.
"""
def __init__(self, *args, **kwargs):
def __init__(self, *args, shuffle=True, **kwargs):
super().__init__(*args, **kwargs)
self._next = {}
self._queue = []
self.next_time = 0
self._shuffle = shuffle
self.step_interval = 1
self.logger = logger.getChild(f"time_{ self.model }")
def add(self, agent: MesaAgent, when=None):
if when is None:
when = When(self.time)
elif not isinstance(when, When):
when = When(when)
if agent.unique_id in self._agents:
del self._agents[agent.unique_id]
if agent.unique_id in self._next:
self._queue.remove((self._next[agent.unique_id], agent))
heapify(self._queue)
when = self.time
elif isinstance(when, When):
when = when.abs()
self._next[agent.unique_id] = when
heappush(self._queue, (when, agent))
self._schedule(agent, None, when)
super().add(agent)
def _schedule(self, agent, condition=None, when=None):
if condition:
if not when:
when, condition = condition.schedule_next(
when or self.time, self.step_interval
)
else:
if when is None:
when = self.time + self.step_interval
condition = None
if self._shuffle:
key = (when, self.model.random.random(), condition)
else:
key = (when, agent.unique_id, condition)
self._next[agent.unique_id] = key
heappush(self._queue, (key, agent))
def step(self) -> None:
"""
Executes agents in order, one at a time. After each step,
@@ -140,76 +133,75 @@ class TimedActivation(BaseScheduler):
"""
self.logger.debug(f"Simulation step {self.time}")
if not self.model.running:
if not self.model.running or self.time == INFINITY:
return
when = NEVER
to_process = []
skipped = []
next_time = INFINITY
ix = 0
self.logger.debug(f"Queue length: {len(self._queue)}")
self.logger.debug("Queue length: {ql}", ql=len(self._queue))
while self._queue:
(when, agent) = self._queue[0]
((when, _id, cond), agent) = self._queue[0]
if when > self.time:
break
heappop(self._queue)
if when.ready(agent):
if cond:
if not cond.ready(agent, self.time):
self._schedule(agent, cond)
continue
try:
agent._last_return = when.return_value(agent)
agent._last_return = cond.return_value(agent)
except Exception as ex:
agent._last_except = ex
else:
agent._last_return = None
agent._last_except = None
self._next.pop(agent.unique_id, None)
to_process.append(agent)
continue
next_time = min(next_time, when.next(self.time))
self._next[agent.unique_id] = next_time
skipped.append((when, agent))
if self._queue:
next_time = min(next_time, self._queue[0][0].next(self.time))
self._queue = [*skipped, *self._queue]
for agent in to_process:
self.logger.debug(f"Stepping agent {agent}")
self.logger.debug("Stepping agent {agent}", agent=agent)
self._next.pop(agent.unique_id, None)
try:
returned = ((agent.step() or Delta(1))).abs(self.time)
returned = agent.step()
except DeadAgent:
if agent.unique_id in self._next:
del self._next[agent.unique_id]
agent.alive = False
continue
# Check status for MESA agents
if not getattr(agent, "alive", True):
continue
value = returned.next(self.time)
agent._last_return = value
if value < self.time:
raise Exception(
f"Cannot schedule an agent for a time in the past ({when} < {self.time})"
if returned:
next_check = returned.schedule_next(
self.time, self.step_interval, first=True
)
if value < INFINITY:
next_time = min(value, next_time)
self._next[agent.unique_id] = returned
heappush(self._queue, (returned, agent))
self._schedule(agent, when=next_check[0], condition=next_check[1])
else:
assert not self._next[agent.unique_id]
next_check = (self.time + self.step_interval, None)
self._schedule(agent)
self.steps += 1
self.logger.debug(f"Updating time step: {self.time} -> {next_time}")
self.time = next_time
if not self._queue or next_time == INFINITY:
if not self._queue:
self.time = INFINITY
self.model.running = False
return self.time
next_time = self._queue[0][0][0]
if next_time < self.time:
raise Exception(
f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
)
self.logger.debug(f"Updating time step: {self.time} -> {next_time}")
self.time = next_time
class ShuffledTimedActivation(TimedActivation):
def __init__(self, *args, **kwargs):
super().__init__(*args, shuffle=True, **kwargs)
class OrderedTimedActivation(TimedActivation):
def __init__(self, *args, **kwargs):
super().__init__(*args, shuffle=False, **kwargs)
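
To make the refactor above concrete, a hedged sketch of how agents can drive their own scheduling with `Delta` and `Cond`, and how shuffling can be turned off through the scheduler class; the agent names and the threshold are invented.

```python
from soil import agents, environment, time

class Sleeper(agents.BaseAgent):
    def step(self):
        return time.Delta(3)   # run again 3 time units from now

class Watcher(agents.BaseAgent):
    def step(self):
        # Suspend until the condition holds; it is re-checked every 2 time units
        yield time.Cond(lambda a: a.model.schedule.get_agent_count() > 1, delta=2)
        self.log("Woke up: more than one agent in the model")

# OrderedTimedActivation keeps (time, unique_id) ordering instead of shuffling
env = environment.Environment(schedule_class=time.OrderedTimedActivation)
env.add_agent(agent_class=Sleeper)
env.add_agent(agent_class=Watcher)
for _ in range(5):
    env.step()
```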

View File

@@ -5,7 +5,7 @@ import traceback
from functools import partial
from shutil import copyfile, move
from multiprocessing import Pool
from multiprocessing import Pool, cpu_count
from contextlib import contextmanager
@@ -24,7 +24,7 @@ consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logging.basicConfig(
level=logging.INFO,
level=logging.DEBUG,
handlers=[
consoleHandler,
],
@@ -140,9 +140,11 @@ def run_and_return_exceptions(func, *args, **kwargs):
return ex
def run_parallel(func, iterable, parallel=False, **kwargs):
if parallel and not os.environ.get("SOIL_DEBUG", None):
p = Pool()
def run_parallel(func, iterable, num_processes=1, **kwargs):
if num_processes > 1 and not os.environ.get("SOIL_DEBUG", None):
if num_processes < 1:
num_processes = cpu_count() - num_processes
p = Pool(processes=num_processes)
wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
for i in p.imap_unordered(wrapped_func, iterable):
if isinstance(i, Exception):

View File

@@ -12,34 +12,34 @@ class Dead(agents.FSM):
return self.die()
class TestMain(TestCase):
class TestAgents(TestCase):
def test_die_returns_infinity(self):
'''The last step of a dead agent should return time.INFINITY'''
"""The last step of a dead agent should return time.INFINITY"""
d = Dead(unique_id=0, model=environment.Environment())
ret = d.step().abs(0)
print(ret, "next")
ret = d.step()
assert ret == stime.NEVER
def test_die_raises_exception(self):
'''A dead agent should raise an exception if it is stepped after death'''
"""A dead agent should raise an exception if it is stepped after death"""
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(stime.DeadAgent):
d.step()
def test_agent_generator(self):
'''
"""
The step function of an agent could be a generator. In that case, the state of the
agent will be resumed after every call to step.
'''
"""
a = 0
class Gen(agents.BaseAgent):
def step(self):
nonlocal a
for i in range(5):
yield
a += 1
e = environment.Environment()
g = Gen(model=e, unique_id=e.next_id())
e.schedule.add(g)
@@ -51,8 +51,9 @@ class TestMain(TestCase):
def test_state_decorator(self):
class MyAgent(agents.FSM):
run = 0
@agents.default_state
@agents.state('original')
@agents.state("original")
def root(self):
self.run += 1
return self.other
@@ -66,4 +67,97 @@ class TestMain(TestCase):
a.step()
assert a.run == 1
a.step()
assert a.run == 2
def test_broadcast(self):
"""
An agent should be able to broadcast messages to every other agent, AND each receiver should be able
to process it
"""
class BCast(agents.Evented):
pings_received = 0
def step(self):
print(self.model.broadcast)
try:
self.model.broadcast("PING")
except Exception as ex:
print(ex)
while True:
self.check_messages()
yield
def on_receive(self, msg, sender=None):
self.pings_received += 1
e = environment.EventedEnvironment()
for i in range(10):
e.add_agent(agent_class=BCast)
e.step()
pings_received = lambda: [a.pings_received for a in e.agents]
assert sorted(pings_received()) == list(range(1, 11))
e.step()
assert all(x == 10 for x in pings_received())
def test_ask_messages(self):
"""
An agent should be able to ask another agent, and wait for a response.
"""
# There are two agents, they try to send pings
# This is arguably a very contrived example.
# There should be a delay of one step between agent 0 and 1
# On the first step:
# Agent 0 sends a PING, but blocks before a PONG
# Agent 1 detects the PING, responds with a PONG, and blocks after its own PING
# After that step, every agent can both receive (there are pending messages) and send.
# In each step, for each agent, one message is sent, and another one is received
# (although not necessarily in that order).
# Results depend on ordering (agents are normally shuffled)
# so we force the timedactivation not to be shuffled
pings = []
pongs = []
responses = []
class Ping(agents.EventedAgent):
def step(self):
target_id = (self.unique_id + 1) % self.count_agents()
target = self.model.agents[target_id]
print("starting")
while True:
if pongs or not pings: # First agent, or anyone after that
pings.append(self.now)
response = yield target.ask("PING")
responses.append(response)
else:
print("NOT sending ping")
print("Checking msgs")
# Do not block if we have already received a PING
if not self.check_messages():
yield self.received()
print("done")
def on_receive(self, msg, sender=None):
if msg == "PING":
pongs.append(self.now)
return "PONG"
raise Exception("This should never happen")
e = environment.EventedEnvironment(schedule_class=stime.OrderedTimedActivation)
for i in range(2):
e.add_agent(agent_class=Ping)
assert e.now == 0
for i in range(5):
e.step()
time = i + 1
assert e.now == time
assert len(pings) == 2 * time
assert len(pongs) == (2 * time) - 1
# Every step between 0 and t appears twice
assert sum(pings) == sum(range(time)) * 2
# It is the same as pings, without the leading 0
assert sum(pongs) == sum(range(time)) * 2

View File

@@ -99,7 +99,7 @@ class TestConfig(TestCase):
with utils.timer("serializing"):
serial = s.to_yaml()
with utils.timer("recovering"):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
recovered = yaml.load(serial, Loader=yaml.FullLoader)
for (k, v) in config.items():
assert recovered[k] == v
@@ -109,24 +109,23 @@ def make_example_test(path, cfg):
root = os.getcwd()
print(path)
s = simulation.from_config(cfg)
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
iterations = s.max_time * s.num_trials
if iterations > 1000:
s.max_time = 100
s.num_trials = 1
if cfg.skip_test and not FORCE_TESTS:
self.skipTest('Example ignored.')
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = cfg.model_params['topology']['params']['n']
assert len(list(env.network_agents)) == n
assert env.now > 0 # It has run
assert env.now <= cfg.max_time # But not further than allowed
except KeyError:
pass
return wrapped

View File

@@ -1,8 +1,9 @@
from unittest import TestCase
import os
from os.path import join
from glob import glob
from soil import serialization, simulation, config
from soil import simulation, config
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")
@@ -14,42 +15,49 @@ class TestExamples(TestCase):
pass
def make_example_test(path, cfg):
def get_test_for_sim(sim, path):
root = os.getcwd()
iterations = sim.max_steps * sim.num_trials
if iterations < 0 or iterations > 1000:
sim.max_steps = 100
sim.num_trials = 1
def wrapped(self):
root = os.getcwd()
for s in simulation.iter_from_config(cfg):
iterations = s.max_steps * s.num_trials
if iterations < 0 or iterations > 1000:
s.max_steps = 100
s.num_trials = 1
assert isinstance(cfg, config.Config)
if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
self.skipTest("Example ignored.")
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = cfg.model_params["network_params"]["n"]
assert len(list(env.network_agents)) == n
except KeyError:
pass
assert env.schedule.steps > 0 # It has run
assert env.schedule.steps <= s.max_steps # But not further than allowed
envs = sim.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = sim.model_params["network_params"]["n"]
assert len(list(env.network_agents)) == n
except KeyError:
pass
assert env.schedule.steps > 0 # It has run
assert env.schedule.steps <= sim.max_steps # But not further than allowed
return wrapped
def add_example_tests():
for cfg, path in serialization.load_files(
join(EXAMPLES, "**", "*.yml"),
):
p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
sim_paths = []
for path in glob(join(EXAMPLES, '**', '*.yml')):
if "soil_output" in path:
continue
for sim in simulation.iter_from_config(path):
sim_paths.append((sim, path))
for path in glob(join(EXAMPLES, '**', '*.py')):
for sim in simulation.iter_from_py(path):
sim_paths.append((sim, path))
for (sim, path) in sim_paths:
if sim.skip_test and not FORCE_TESTS:
continue
test_case = get_test_for_sim(sim, path)
fname = os.path.basename(path)
p.__name__ = "test_example_file_%s" % fname
p.__doc__ = "%s should be a valid configuration" % fname
setattr(TestExamples, p.__name__, p)
del p
test_case.__name__ = "test_example_file_%s" % fname
test_case.__doc__ = "%s should be a valid configuration" % fname
setattr(TestExamples, test_case.__name__, test_case)
del test_case
add_example_tests()

View File

@@ -172,25 +172,24 @@ class TestMain(TestCase):
assert len(configs) > 0
def test_until(self):
config = {
"name": "until_sim",
"model_params": {
"network_params": {},
"agents": {
"fixed": [
{
"agent_class": agents.BaseAgent,
}
]
},
},
"max_time": 2,
"num_trials": 50,
}
s = simulation.from_config(config)
n_runs = 0
class CheckRun(agents.BaseAgent):
def step(self):
nonlocal n_runs
n_runs += 1
return super().step()
n_trials = 50
max_time = 2
s = simulation.Simulation(
model_params={"agents": [{"agent_class": CheckRun}]},
num_trials=n_trials,
max_time=max_time,
)
runs = list(s.run_simulation(dry_run=True))
over = list(x.now for x in runs if x.now > 2)
assert len(runs) == config["num_trials"]
assert len(runs) == n_trials
assert len(over) == 0
def test_fsm(self):

View File

@@ -72,7 +72,7 @@ class TestNetwork(TestCase):
assert len(env.agents) == 2
assert env.agents[1].count_agents(state_id="normal") == 2
assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
assert env.agents[0].neighbors == 1
assert env.agents[0].count_neighbors() == 1
def test_custom_agent_neighbors(self):
"""Allow for search of neighbors with a certain state_id"""
@@ -90,7 +90,7 @@ class TestNetwork(TestCase):
env = s.run_simulation(dry_run=True)[0]
assert env.agents[1].count_agents(state_id="normal") == 2
assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
assert env.agents[0].neighbors == 1
assert env.agents[0].count_neighbors() == 1
def test_subgraph(self):
"""An agent should be able to subgraph the global topology"""

View File

@@ -2,11 +2,12 @@ from unittest import TestCase
from soil import time, agents, environment
class TestMain(TestCase):
def test_cond(self):
'''
"""
A condition should match a When if the condition is True
'''
"""
t = time.Cond(lambda t: True)
f = time.Cond(lambda t: False)
@@ -16,59 +17,58 @@ class TestMain(TestCase):
assert w is not f
def test_cond(self):
'''
"""
Comparing a Cond to a Delta should always return False
'''
"""
c = time.Cond(lambda t: False)
d = time.Delta(1)
assert c is not d
def test_cond_env(self):
'''
'''
""" """
times_started = []
times_awakened = []
times_asleep = []
times = []
done = 0
done = []
class CondAgent(agents.BaseAgent):
def step(self):
nonlocal done
times_started.append(self.now)
while True:
yield time.Cond(lambda agent: agent.model.schedule.time >= 10)
times_asleep.append(self.now)
yield time.Cond(lambda agent: agent.now >= 10, delta=2)
times_awakened.append(self.now)
if self.now >= 10:
break
done += 1
env = environment.Environment(agents=[{'agent_class': CondAgent}])
done.append(self.now)
env = environment.Environment(agents=[{"agent_class": CondAgent}])
while env.schedule.time < 11:
env.step()
times.append(env.now)
env.step()
assert env.schedule.time == 11
assert times_started == [0]
assert times_awakened == [10]
assert done == 1
assert done == [10]
# The first time will produce the Cond.
# Since there are no other agents, time will not advance, but the number
# of steps will.
assert env.schedule.steps == 12
assert len(times) == 12
assert env.schedule.steps == 6
assert len(times) == 6
while env.schedule.time < 12:
env.step()
while env.schedule.time < 13:
times.append(env.now)
env.step()
assert env.schedule.time == 12
assert times == [0, 2, 4, 6, 8, 10, 11]
assert env.schedule.time == 13
assert times_started == [0, 11]
assert times_awakened == [10, 11]
assert done == 2
assert times_awakened == [10]
assert done == [10]
# Once more to yield the cond, another one to continue
assert env.schedule.steps == 14
assert len(times) == 14
assert env.schedule.steps == 7
assert len(times) == 7