* Removed old/unnecessary models
* Added a `simulation.{iter_}from_py` method to load simulations from python
files
* Changed tests of examples to run programmatic simulations
* Fixed programmatic examples
J. Fernando Sánchez 2022-11-13 20:31:05 +01:00
parent d3cee18635
commit 2869b1e1e6
41 changed files with 499 additions and 1100 deletions

View File

@ -5,8 +5,46 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
**Note**: Mesa 0.30 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/migration_0.30.rst)
# Changes in version 0.3
## SOIL vs MESA
SOIL is a batteries-included platform that builds on top of MESA and provides the following out of the box:
* Integration with (social) networks
* The ability to more easily assign agents to your model (and optionally to its network):
* Assigning agents to nodes, and vice versa
* Using a description (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* **Several types of abstractions for agents**:
* Finite state machine, where methods can be turned into a state
* Network agents, which have convenience methods to access the model's topology
* Generator-based agents, whose state is paused through a `yield` and resumed on the next step
* **Reporting and data collection**:
* Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
* All data collected are exported by default to a SQLite database and a description file
* Options to export to other formats, such as CSV, or defining your own exporters
* A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**
* Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
* Time intervals between each step are flexible.
* There are primitives to specify when the next execution of an agent should happen (or under which conditions); see the sketch after this list
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
* Run models in parallel
* Save results to different formats
* Simulation configuration files
* A command line interface (`soil`) to run multiple simulations
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
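For illustration, here is a minimal sketch of the event-based scheduling mechanism, based on the FSM and `Delta` usage found in the examples of this repository (the `Ticker` class and its 3-unit delay are made up):

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Ticker(FSM):
    """Toy agent that asks to be scheduled again only every 3 time units."""

    @default_state
    @state
    def ticking(self):
        self.debug(f"Running at t={self.now}")
        # Returning (next_state, delay): stay in this state and skip the
        # next two steps instead of running on every one.
        return None, Delta(3)
```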
Nevertheless, most features in SOIL have been designed to integrate with plain Mesa.
For instance, it should be possible to run a `mesa.Model` using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler into a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or even wrong.
For instance, you may add any `soil.agent` agent (except for the `soil.NetworkAgent`, as it needs a topology) to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on it will differ greatly.
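As a point of reference, this is roughly what a fully programmatic simulation looks like, mirroring the programmatic examples updated in this commit (the `Walker` agent and all parameter values are illustrative):

```python
from soil import Simulation
from soil.agents import FSM, state, default_state


class Walker(FSM):
    # Illustrative agent that just counts its own activations
    times_run = 0

    @default_state
    @state
    def neutral(self):
        self.times_run += 1


simulation = Simulation(
    name="MinimalExample",
    model_params={
        "topology": {"params": {"generator": "complete_graph", "n": 10}},
        "agents": {"distribution": [{"agent_class": Walker, "topology": True}]},
    },
    num_trials=1,
    max_time=50,
    dry_run=True,  # do not write results to disk
)

if __name__ == "__main__":
    envs = simulation.run()
    for agent in envs[0].agents:
        print(agent.times_run)
```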
## Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
@ -18,27 +56,6 @@ If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation

View File

@ -1,262 +0,0 @@
Configuring a simulation
------------------------
There are two ways to configure a simulation: programmatically and with a configuration file.
In both cases, the parameters used are the same.
The advantage of a configuration file is that it is a clean, declarative description, which makes the simulation easier to reproduce.
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. literalinclude:: example.yml
:language: yaml
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infect``.
The state of the agents will be updated every 2 seconds (``interval``).
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and, for every trial, a ``sqlite`` file with the state of every network node and the environment parameters at every step of the simulation.
.. code::
soil_output
└── MyExampleSimulation
├── MyExampleSimulation.dumped.yml
├── MyExampleSimulation.simulation.pickle
├── MyExampleSimulation_trial_0.db.sqlite
├── MyExampleSimulation_trial_1.db.sqlite
└── MyExampleSimulation_trial_2.db.sqlite
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
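The exact tables depend on the exporters and reporters in use, but the per-trial files are plain SQLite databases, so they can be inspected with standard tools. A quick sketch using only the standard library (the file name is taken from the listing above):

.. code:: python

   import sqlite3

   path = "soil_output/MyExampleSimulation/MyExampleSimulation_trial_0.db.sqlite"
   with sqlite3.connect(path) as conn:
       # List the tables the exporter wrote, and how many rows each one has
       tables = [row[0] for row in
                 conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
       for table in tables:
           count = conn.execute(f"SELECT count(*) FROM {table}").fetchone()[0]
           print(table, count)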
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
For simple networks, you may also include them in the configuration itself using the ``topology`` parameter, like so:
.. code:: yaml
---
topology:
nodes:
- id: First
- id: Second
links:
- source: First
target: Second
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
.. code:: yaml
network_params:
generator: complete_graph
n: 100
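When configuring a simulation programmatically, the generator does not have to be a name from networkx: any callable that returns a graph can be used, as in the programmatic example in this repository. A sketch, using the older ``network_params`` style documented here (newer releases use ``model_params``/``topology`` instead; the function name is illustrative):

.. code:: python

   import networkx as nx
   from soil import Simulation

   def ring_of_100():
       # Any callable that returns a networkx graph can act as a generator
       return nx.cycle_graph(100)

   s = Simulation(
       name="CustomTopology",
       network_params={"generator": ring_of_100},
   )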
Environment
============
The environment is the place where the shared state of the simulation is stored.
That includes global parameters, such as the probability of a disease outbreak, as well as other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
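A sketch of such a custom environment (the lottery logic and names are made up; the ``Environment`` base class and class-level parameter defaults follow the patterns used elsewhere in this repository):

.. code:: python

   from soil import Environment

   class LotteryEnv(Environment):
       # Default value for an environment parameter, as a class attribute
       winning_prob = 0.001

       def plays_and_wins(self, agent):
           """Decide centrally whether *agent* wins, instead of leaving
           the roll to the agent itself."""
           return self.random.random() < self.winning_prob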
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_class``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a delay of ``interval`` seconds.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
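For example, a bare-bones agent could look like the following sketch (the class name is made up; the ``self.model`` access style follows the tutorial updated in this commit, while older code used ``self.env``, and ``prob_infect`` is the parameter from the example configuration above):

.. code:: python

   import soil

   class InfectionLogger(soil.agents.BaseAgent):
       def step(self):
           # Runs on every activation: read shared state through the model
           if self.prob(self.model["prob_infect"]):
               self.info(f"Infected at t={self.now}")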
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated with an agent of that type.
.. code:: yaml
agent_class: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type with the ``weight`` property.
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 1
- agent_class: CounterModel
weight: 5
The third option is to specify the type of agent on the node itself, e.g.:
.. code:: yaml
topology:
nodes:
- id: first
agent_class: BaseAgent
states:
first:
agent_class: SISaModel
This would also work with a randomly generated network:
.. code:: yaml
network:
generator: complete
n: 5
agent_class: BaseAgent
states:
- agent_class: SISaModel
In addition to agent type, you may add a custom initial state to the distribution.
This is very useful to add the same agent type with different states.
For example, to populate the network with SISaModel agents, roughly 10% of them in the discontent state:
.. code:: yaml
network_agents:
- agent_class: SISaModel
weight: 9
state:
id: neutral
- agent_class: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_class: SISaModel
network:
generator: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
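The same kind of distribution can also be written programmatically. A sketch mirroring the weighted YAML above, in the older ``network_agents`` style this document describes (class imports follow the modules in this repository; exact keyword support may differ between versions):

.. code:: python

   from soil import Simulation
   from soil.agents import SISaModel, CounterModel

   s = Simulation(
       name="WeightedAgents",
       network_params={"generator": "complete_graph", "n": 100},
       network_agents=[
           # Five times as many CounterModel agents as SISaModel agents
           {"agent_class": SISaModel, "weight": 1},
           {"agent_class": CounterModel, "weight": 5},
       ],
   )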
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_class: MyAgent
state:
mood: happy
- agent_class: DummyAgent
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.
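A sketch of a minimal environment agent (modelled loosely on the news-event agent used in the tutorial; ``daily_probability_of_earthquake`` and ``number_of_earthquakes`` are the illustrative parameters introduced earlier in this document):

.. code:: python

   import soil

   class EarthquakeMonitor(soil.agents.BaseAgent):
       """Not attached to any node; it only touches shared state."""

       def step(self):
           # Small chance of an earthquake at every step of the simulation
           if self.prob(self.model["daily_probability_of_earthquake"]):
               self.model["number_of_earthquakes"] += 1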
Templating
==========
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters on the simulation.
For instance, you may want to run a simulation with different agent distributions.
This can be done in Soil using **templates**.
A template is a configuration where some of the values are specified with a variable.
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
There are two types of variables, depending on how their values are decided:
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
Here is an example with a single fixed variable and two bounded variables:
.. literalinclude:: ../examples/template.yml
:language: yaml
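The expansion of fixed variables can be thought of as a Cartesian product over the listed values. A rough, library-agnostic sketch (not Soil's actual implementation) of how one configuration is produced per combination:

.. code:: python

   from itertools import product

   # Two illustrative fixed variables: each combination yields one simulation
   fixed_vars = {
       "var1": [1, 2],
       "var2": ["a", "b", "c"],
   }

   names = list(fixed_vars)
   for values in product(*fixed_vars.values()):
       config = dict(zip(names, values))
       print(config)  # 2 x 3 = 6 combinations in total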

View File

@ -3,33 +3,38 @@ name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
model_params:
topology:
params:
generator: barabasi_albert_graph
n: 100
m: 2
agents:
distribution:
- agent_class: SISaModel
weight: 1
topology: True
ratio: 0.1
state:
id: content
state_id: content
- agent_class: SISaModel
weight: 1
topology: True
ratio: .1
state:
id: discontent
state_id: discontent
- agent_class: SISaModel
weight: 8
topology: True
ratio: 0.8
state:
id: neutral
environment_params:
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1
state_id: neutral
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1

View File

@ -1,8 +1,3 @@
.. Soil documentation master file, created by
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Soil's documentation!
================================

View File

@ -14,6 +14,10 @@ Now test that it worked by running the command line tool
soil --help
#or
python -m soil --help
Or, if you're using soil programmatically:
.. code:: python
@ -21,4 +25,4 @@ Or, if you're using using soil programmatically:
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.
The latest version can be installed through `GitHub <https://github.com/gsi-upm/soil>`_ or `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_.

View File

@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.

docs/mesa.rst Normal file
View File

@ -0,0 +1,22 @@
Mesa compatibility
------------------
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
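As a minimal illustration of the items already ticked above, a plain mesa agent can be handed to a soil simulation. This is only a sketch mirroring ``examples/mesa``: the ``Greeter`` class is made up, and it accepts extra keyword arguments because soil may pass additional parameters to agents.

.. code:: python

   from mesa import Agent as MesaAgent
   from soil import Simulation

   class Greeter(MesaAgent):
       def __init__(self, unique_id, model, **kwargs):
           # Accept and ignore soil-specific keyword arguments
           super().__init__(unique_id=unique_id, model=model)

       def step(self):
           print(f"Hello from agent {self.unique_id}")

   simulation = Simulation(
       name="MesaAgents",
       model_params={"agents": [{"agent_class": Greeter}]},
       max_time=5,
       dry_run=True,
   )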

View File

@ -2,29 +2,32 @@
name: quickstart
num_trials: 1
max_time: 1000
network_agents:
- agent_class: SISaModel
state:
id: neutral
weight: 1
- agent_class: SISaModel
state:
id: content
weight: 2
network_params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
environment_params:
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1
model_params:
agents:
- agent_class: SISaModel
topology: true
state:
id: neutral
weight: 1
- agent_class: SISaModel
topology: true
state:
id: content
weight: 2
topology:
params:
n: 100
k: 5
p: 0.2
generator: newman_watts_strogatz_graph
neutral_discontent_spon_prob: 0.05
neutral_discontent_infected_prob: 0.1
neutral_content_spon_prob: 0.2
neutral_content_infected_prob: 0.4
discontent_neutral: 0.2
discontent_content: 0.05
content_discontent: 0.05
variance_d_c: 0.05
variance_c_d: 0.1
content_neutral: 0.1
standard_variance: 0.1

View File

@ -115,13 +115,13 @@ Here's the code:
@soil.agents.state
def neutral(self):
r = random.random()
if self['has_tv'] and r < self.env['prob_tv_spread']:
if self['has_tv'] and r < self.model['prob_tv_spread']:
return self.infected
return
@soil.agents.state
def infected(self):
prob_infect = self.env['prob_neighbor_spread']
prob_infect = self.model['prob_neighbor_spread']
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
r = random.random()
if r < prob_infect:
@ -146,11 +146,11 @@ spreading the rumor.
class NewsEnvironmentAgent(soil.agents.BaseAgent):
def step(self):
if self.now == self['event_time']:
self.env['prob_tv_spread'] = 1
self.env['prob_neighbor_spread'] = 1
self.model['prob_tv_spread'] = 1
self.model['prob_neighbor_spread'] = 1
elif self.now > self['event_time']:
self.env['prob_tv_spread'] = self.env['prob_tv_spread'] * TV_FACTOR
self.env['prob_neighbor_spread'] = self.env['prob_neighbor_spread'] * NEIGHBOR_FACTOR
self.model['prob_tv_spread'] = self.model['prob_tv_spread'] * TV_FACTOR
self.model['prob_neighbor_spread'] = self.model['prob_neighbor_spread'] * NEIGHBOR_FACTOR
Testing the agents
~~~~~~~~~~~~~~~~~~

View File

@ -1,4 +1,5 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta
class Fibonacci(FSM):
@ -11,7 +12,7 @@ class Fibonacci(FSM):
def counting(self):
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev)
return None, Delta(prev)
class Odds(FSM):
@ -21,18 +22,26 @@ class Odds(FSM):
@state
def odds(self):
self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1 + self.now % 2)
return None, Delta(1 + self.now % 2)
from soil import Simulation
simulation = Simulation(
model_params={
'agents':[
{'agent_class': Fibonacci, 'node_id': 0},
{'agent_class': Odds, 'node_id': 1}
],
'topology': {
'params': {
'generator': 'complete_graph',
'n': 2
}
},
},
max_time=100,
)
if __name__ == "__main__":
from soil import Simulation
s = Simulation(
network_agents=[
{"ids": [0], "agent_class": Fibonacci},
{"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)
simulation.run(dry_run=True)

View File

@ -18,6 +18,7 @@ An example scenario could play like the following:
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid
@ -39,7 +40,7 @@ class Journey:
tip: float
passenger: Passenger
driver: Driver = None
driver: Optional[Driver] = None
class City(EventedEnvironment):
@ -239,5 +240,4 @@ simulation = Simulation(
)
if __name__ == "__main__":
with easy(simulation) as s:
s.run()
simulation.run()

View File

@ -111,4 +111,5 @@ server = ModularServer(
)
server.port = 8521
server.launch(open_browser=False)
if __name__ == '__main__':
server.launch(open_browser=False)

View File

@ -28,7 +28,7 @@ class MoneyAgent(MesaAgent):
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1):
def __init__(self, unique_id, model, wealth=1, **kwargs):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth

View File

@ -10,32 +10,48 @@ def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
G.add_node(2)
return G
class MyAgent(agents.FSM):
times_run = 0
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if agents.prob(0.2):
if self.prob(0.2):
self.times_run += 1
self.info("This runs 2/10 times on average")
s = Simulation(
simulation = Simulation(
name="Programmatic",
network_params={"generator": mygenerator},
model_params={
'topology': {
'params': {
'generator': mygenerator
},
},
'agents': {
'distribution': [{
'agent_class': MyAgent,
'topology': True,
}]
}
},
seed='Program',
agent_reporters={'times_run': 'times_run'},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)
if __name__ == "__main__":
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = simulation.run()
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()
# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')
for agent in envs[0].agents:
print(agent.times_run)

View File

@ -170,6 +170,6 @@ class Police(FSM):
if __name__ == "__main__":
from soil import simulation
from soil import run_from_config
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)

View File

@ -5,6 +5,8 @@ import math
class RabbitEnv(Environment):
prob_death = 1e-100
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@ -129,7 +131,7 @@ class RandomAccident(BaseAgent):
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
prob_death = self.model.prob_death * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))

View File

@ -31,11 +31,11 @@ class MyAgent(agents.FSM):
s = Simulation(
name="Programmatic",
network_agents=[{"agent_class": MyAgent, "id": 0}],
topology={"nodes": [{"id": 0}], "links": []},
model_params={
'agents': [{'agent_class': MyAgent}],
},
num_trials=1,
max_time=100,
agent_class=MyAgent,
dry_run=True,
)

View File

@ -108,14 +108,14 @@ class TerroristSpreadModel(FSM, Geo):
return
return self.leader
def ego_search(self, steps=1, center=False, node=None, **kwargs):
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = as_node(node if node is not None else self)
node = agent.node
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
def degree(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_degree"))
@ -125,8 +125,8 @@ class TerroristSpreadModel(FSM, Geo):
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
def betweenness(self, agent, force=False):
node = agent.node
if (
force
or (not hasattr(self.model, "_betweenness"))

View File

@ -216,13 +216,13 @@
" @soil.agents.state\n",
" def neutral(self):\n",
" r = random.random()\n",
" if self['has_tv'] and r < self.env['prob_tv_spread']:\n",
" if self['has_tv'] and r < self.model['prob_tv_spread']:\n",
" return self.infected\n",
" return\n",
" \n",
" @soil.agents.state\n",
" def infected(self):\n",
" prob_infect = self.env['prob_neighbor_spread']\n",
" prob_infect = self.model['prob_neighbor_spread']\n",
" for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):\n",
" r = random.random()\n",
" if r < prob_infect:\n",
@ -271,11 +271,11 @@
"class NewsEnvironmentAgent(soil.agents.NetworkAgent):\n",
" def step(self):\n",
" if self.now == self['event_time']:\n",
" self.env['prob_tv_spread'] = 1\n",
" self.env['prob_neighbor_spread'] = 1\n",
" self.model['prob_tv_spread'] = 1\n",
" self.model['prob_neighbor_spread'] = 1\n",
" elif self.now > self['event_time']:\n",
" self.env['prob_tv_spread'] = self.env['prob_tv_spread'] * TV_FACTOR\n",
" self.env['prob_neighbor_spread'] = self.env['prob_neighbor_spread'] * NEIGHBOR_FACTOR"
" self.model['prob_tv_spread'] = self.model['prob_tv_spread'] * TV_FACTOR\n",
" self.model['prob_neighbor_spread'] = self.model['prob_neighbor_spread'] * NEIGHBOR_FACTOR"
]
},
{

View File

@ -1 +1 @@
0.30.0rc3
0.30.0rc4

View File

@ -1,6 +1,7 @@
from __future__ import annotations
import importlib
from importlib.resources import path
import sys
import os
import logging
@ -14,10 +15,12 @@ try:
except NameError:
basestring = str
from pathlib import Path
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment, EventedEnvironment
from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
@ -35,8 +38,10 @@ def main(
**kwargs,
):
sim = None
if isinstance(cfg, Simulation):
sim = cfg
import argparse
from . import simulation

View File

@ -22,10 +22,10 @@ class BassModel(FSM):
else:
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if self.prob((self["imitation_prob"] * num_neighbors_aware)):
if self.prob((self.imitation_prob * num_neighbors_aware)):
self.sentimentCorrelation = 1
return self.aware
@state
def aware(self):
self.die()
self.die()

View File

@ -1,118 +0,0 @@
from . import FSM, state, default_state
class BigMarketModel(FSM):
"""
Settings:
Names:
enterprises [Array]
tweet_probability_enterprises [Array]
Users:
tweet_probability_users
tweet_relevant_probability
tweet_probability_about [Array]
sentiment_about [Array]
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params["enterprises"]
self.type = ""
if self.id < len(self.enterprises): # Enterprises
self._set_state(self.enterprise.id)
self.type = "Enterprise"
self.tweet_probability = environment.environment_params[
"tweet_probability_enterprises"
][self.id]
else: # normal users
self.type = "User"
self._set_state(self.user.id)
self.tweet_probability = environment.environment_params[
"tweet_probability_users"
]
self.tweet_relevant_probability = environment.environment_params[
"tweet_relevant_probability"
]
self.tweet_probability_about = environment.environment_params[
"tweet_probability_about"
] # List
self.sentiment_about = environment.environment_params[
"sentiment_about"
] # List
@state
def enterprise(self):
if self.random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbour users
for x in aware_neighbors:
if self.random.uniform(0, 10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
# Enforce the limits
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id] < -1:
x.sentiment_about[self.id] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[self.id]
] = x.sentiment_about[self.id]
@state
def user(self):
if self.random.random() < self.tweet_probability: # Tweets
if (
self.random.random() < self.tweet_relevant_probability
): # Tweets something relevant
# Tweet probability per enterprise
for i in range(len(self.enterprises)):
random_num = self.random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
# NEGATIVE
self.userTweets("negative", i)
elif self.sentiment_about[i] == 0:
# NEUTRAL
pass
else:
# POSITIVE
self.userTweets("positive", i)
for i in range(
len(self.enterprises)
): # So that it never is set to 0 if there are not changes (logs)
self.attrs[
"sentiment_enterprise_%s" % self.enterprises[i]
] = self.sentiment_about[i]
def userTweets(self, sentiment, enterprise):
aware_neighbors = self.get_neighbors(
state_id=self.number_of_enterprises
) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] += 0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -= 0.003
else:
pass
# Enforce the limits
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs[
"sentiment_enterprise_%s" % self.enterprises[enterprise]
] = x.sentiment_about[enterprise]

View File

@ -1,14 +1,14 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent, as_node
from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, node=None, center=False, **kwargs):
def geo_search(self, radius, agent=None, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = as_node(node if node is not None else self)
node = agent.node
G = self.subgraph(**kwargs)
@ -18,4 +18,4 @@ class Geo(NetworkAgent):
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
return [nodes[i] for i in indices if center or (nodes[i] != node)]

View File

@ -1,7 +1,7 @@
from . import BaseAgent
from . import Agent, state, default_state
class IndependentCascadeModel(BaseAgent):
class IndependentCascadeModel(Agent):
"""
Settings:
innovation_prob
@ -9,42 +9,22 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.innovation_prob = self.env.environment_params["innovation_prob"]
self.imitation_prob = self.env.environment_params["imitation_prob"]
self.state["time_awareness"] = 0
self.state["sentimentCorrelation"] = 0
time_awareness = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
# Outside effects
@default_state
@state
def outside(self):
if self.prob(self.model.innovation_prob):
self.sentimentCorrelation = 1
self.time_awareness = self.model.now # To know when they have been infected
return self.imitate
def behaviour(self):
aware_neighbors_1_time_step = []
# Outside effects
if self.prob(self.innovation_prob):
if self.state["id"] == 0:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state[
"time_awareness"
] = self.env.now # To know when they have been infected
else:
pass
@state
def imitate(self):
aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
return
# Imitation effects
if self.state["id"] == 0:
aware_neighbors = self.get_neighbors(state_id=1)
for x in aware_neighbors:
if x.state["time_awareness"] == (self.env.now - 1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if self.prob(self.imitation_prob * num_neighbors_aware):
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
else:
pass
return
if self.prob(self.model.imitation_prob * len(aware_neighbors)):
self.sentimentCorrelation = 1
return self.outside

View File

@ -1,270 +0,0 @@
import numpy as np
from . import BaseAgent
class SpreadModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
# Use a single generator with the same seed as `self.random`
random = np.random.default_rng(seed=self._seed)
self.prob_neutral_making_denier = random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.prob(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
class ControlModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(
environment.environment_params["prob_neutral_making_denier"],
environment.environment_params["standard_variance"],
)
self.prob_infect = np.random.normal(
environment.environment_params["prob_infect"],
environment.environment_params["standard_variance"],
)
self.prob_cured_healing_infected = np.random.normal(
environment.environment_params["prob_cured_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_cured_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_cured_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_healing_infected = np.random.normal(
environment.environment_params["prob_vaccinated_healing_infected"],
environment.environment_params["standard_variance"],
)
self.prob_vaccinated_vaccinate_neutral = np.random.normal(
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
environment.environment_params["standard_variance"],
)
self.prob_generate_anti_rumor = np.random.normal(
environment.environment_params["prob_generate_anti_rumor"],
environment.environment_params["standard_variance"],
)
def step(self):
if self.state["id"] == 0: # Neutral
self.neutral_behaviour()
elif self.state["id"] == 1: # Infected
self.infected_behaviour()
elif self.state["id"] == 2: # Cured
self.cured_behaviour()
elif self.state["id"] == 3: # Vaccinated
self.vaccinated_behaviour()
elif self.state["id"] == 4: # Beacon-off
self.beacon_off_behaviour()
elif self.state["id"] == 5: # Beacon-on
self.beacon_on_behaviour()
def neutral_behaviour(self):
self.state["visible"] = False
# Infected
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
if self.random(self.prob_neutral_making_denier):
self.state["id"] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_infect):
neighbor.state["id"] = 1 # Infected
self.state["visible"] = False
def cured_behaviour(self):
self.state["visible"] = True
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
def vaccinated_behaviour(self):
self.state["visible"] = True
# Cure
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_cured_healing_infected):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors_2:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
def beacon_off_behaviour(self):
self.state["visible"] = False
infected_neighbors = self.get_neighbors(state_id=1)
if len(infected_neighbors) > 0:
self.state["id"] == 5 # Beacon on
def beacon_on_behaviour(self):
self.state["visible"] = False
# Cure (M2 feature added)
infected_neighbors = self.get_neighbors(state_id=1)
for neighbor in infected_neighbors:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
for neighbor in neutral_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
for neighbor in infected_neighbors_infected:
if self.prob(self.prob_generate_anti_rumor):
neighbor.state["id"] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighbors(state_id=0)
for neighbor in neutral_neighbors:
if self.prob(self.prob_cured_vaccinate_neutral):
neighbor.state["id"] = 3 # Vaccinated

View File

@ -1,8 +1,9 @@
import numpy as np
from . import FSM, state
from hashlib import sha512
from . import Agent, state, default_state
class SISaModel(FSM):
class SISaModel(Agent):
"""
Settings:
neutral_discontent_spon_prob
@ -28,38 +29,45 @@ class SISaModel(FSM):
standard_variance
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
random = np.random.default_rng(seed=self._seed)
seed = self.model._seed
if isinstance(seed, (str, bytes, bytearray)):
if isinstance(seed, str):
seed = seed.encode()
seed = int.from_bytes(seed + sha512(seed).digest(), 'big')
random = np.random.default_rng(seed=seed)
self.neutral_discontent_spon_prob = random.normal(
self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
self.model.neutral_discontent_spon_prob, self.model.standard_variance
)
self.neutral_discontent_infected_prob = random.normal(
self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
self.model.neutral_discontent_infected_prob, self.model.standard_variance
)
self.neutral_content_spon_prob = random.normal(
self.env["neutral_content_spon_prob"], self.env["standard_variance"]
self.model.neutral_content_spon_prob, self.model.standard_variance
)
self.neutral_content_infected_prob = random.normal(
self.env["neutral_content_infected_prob"], self.env["standard_variance"]
self.model.neutral_content_infected_prob, self.model.standard_variance
)
self.discontent_neutral = random.normal(
self.env["discontent_neutral"], self.env["standard_variance"]
self.model.discontent_neutral, self.model.standard_variance
)
self.discontent_content = random.normal(
self.env["discontent_content"], self.env["variance_d_c"]
self.model.discontent_content, self.model.variance_d_c
)
self.content_discontent = random.normal(
self.env["content_discontent"], self.env["variance_c_d"]
self.model.content_discontent, self.model.variance_c_d
)
self.content_neutral = random.normal(
self.env["content_neutral"], self.env["standard_variance"]
self.model.discontent_neutral, self.model.standard_variance
)
@default_state
@state
def neutral(self):
# Spontaneous effects
@ -70,10 +78,10 @@ class SISaModel(FSM):
# Infected
discontent_neighbors = self.count_neighbors(state_id=self.discontent)
if self.prob(scontent_neighbors * self.neutral_discontent_infected_prob):
if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
return self.discontent
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.neutral_content_infected_prob):
if self.prob(content_neighbors * self.neutral_content_infected_prob):
return self.content
return self.neutral
@ -85,7 +93,7 @@ class SISaModel(FSM):
# Superinfected
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(s * self.discontent_content):
if self.prob(content_neighbors * self.discontent_content):
return self.content
return self.discontent
@ -97,6 +105,6 @@ class SISaModel(FSM):
# Superinfected
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
if self.prob(scontent_neighbors * self.content_discontent):
if self.prob(discontent_neighbors * self.content_discontent):
self.discontent
return self.content

View File

@ -1,115 +0,0 @@
from . import BaseAgent
class SentimentCorrelationModel(BaseAgent):
"""
Settings:
outside_effects_prob
anger_prob
joy_prob
sadness_prob
disgust_prob
"""
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.outside_effects_prob = environment.environment_params[
"outside_effects_prob"
]
self.anger_prob = environment.environment_params["anger_prob"]
self.joy_prob = environment.environment_params["joy_prob"]
self.sadness_prob = environment.environment_params["sadness_prob"]
self.disgust_prob = environment.environment_params["disgust_prob"]
self.state["time_awareness"] = []
for i in range(4): # In this model we have 4 sentiments
self.state["time_awareness"].append(
0
) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
self.state["sentimentCorrelation"] = 0
def step(self):
self.behaviour()
def behaviour(self):
angry_neighbors_1_time_step = []
joyful_neighbors_1_time_step = []
sad_neighbors_1_time_step = []
disgusted_neighbors_1_time_step = []
angry_neighbors = self.get_neighbors(state_id=1)
for x in angry_neighbors:
if x.state["time_awareness"][0] > (self.env.now - 500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighbors(state_id=2)
for x in joyful_neighbors:
if x.state["time_awareness"][1] > (self.env.now - 500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighbors(state_id=3)
for x in sad_neighbors:
if x.state["time_awareness"][2] > (self.env.now - 500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighbors(state_id=4)
for x in disgusted_neighbors:
if x.state["time_awareness"][3] > (self.env.now - 500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
anger_prob = self.anger_prob + (
len(angry_neighbors_1_time_step) * self.anger_prob
)
joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
sadness_prob = self.sadness_prob + (
len(sad_neighbors_1_time_step) * self.sadness_prob
)
disgust_prob = self.disgust_prob + (
len(disgusted_neighbors_1_time_step) * self.disgust_prob
)
outside_effects_prob = self.outside_effects_prob
num = self.random.random()
if num < outside_effects_prob:
self.state["id"] = self.random.randint(1, 4)
self.state["sentimentCorrelation"] = self.state[
"id"
] # It is stored when it has been infected for the dynamic network
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]
if num < anger_prob:
self.state["id"] = 1
self.state["sentimentCorrelation"] = 1
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < joy_prob + anger_prob and num > anger_prob:
self.state["id"] = 2
self.state["sentimentCorrelation"] = 2
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
self.state["id"] = 3
self.state["sentimentCorrelation"] = 3
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
elif (
num < disgust_prob + sadness_prob + anger_prob + joy_prob
and num > sadness_prob + anger_prob + joy_prob
):
self.state["id"] = 4
self.state["sentimentCorrelation"] = 4
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
self.state["sentiment"] = self.state["id"]

View File

@ -555,9 +555,9 @@ def _from_fixed(
def _from_distro(
distro: List[config.AgentDistro],
n: int,
topology: str,
default: config.SingleAgentConfig,
random,
topology: str = None
) -> List[Dict[str, Any]]:
agents = []
@ -621,19 +621,18 @@ def _from_distro(
from .network_agents import *
from .fsm import *
from .evented import *
class Agent(NetworkAgent, FSM, EventedAgent):
"""Default agent class, has both network and event capabilities"""
from .BassModel import *
from .BigMarketModel import *
from .IndependentCascadeModel import *
from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
class Agent(NetworkAgent, EventedAgent):
"""Default agent class, has both network and event capabilities"""
try:
import scipy
from .Geo import Geo

View File

@ -14,8 +14,11 @@ class NetworkAgent(BaseAgent):
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
def iter_neighbors(self, **kwargs):
return self.iter_agents(limit_neighbors=True, **kwargs)
def get_neighbors(self, **kwargs):
return list(self.iter_agents(limit_neighbors=True, **kwargs))
return list(self.iter_neighbors())
@property
def node(self):

View File

@ -37,13 +37,8 @@ class Topology(BaseModel):
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
params: Optional[NetParams]
params: Optional[Dict[str, Any]]
fixed: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
@ -135,9 +130,11 @@ class Config(BaseModel, extra=Extra.allow):
num_trials: int = 1
max_time: float = 100
max_steps: int = -1
num_processes: int = 1
interval: float = 1
seed: str = ""
dry_run: bool = False
skip_test: bool = False
model_class: Union[Type, str] = environment.Environment
model_params: Optional[Dict[str, Any]] = {}

View File

@ -1,6 +1,17 @@
from mesa import DataCollector as MDC
class SoilDataCollector(MDC):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
class SoilCollector(MDC):
def __init__(self, model_reporters=None, agent_reporters=None, tables=None, **kwargs):
model_reporters = model_reporters or {}
agent_reporters = agent_reporters or {}
tables = tables or {}
if 'agent_count' not in model_reporters:
model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
if 'state_id' not in agent_reporters:
agent_reporters['agent_id'] = lambda agent: agent.get('state_id', None)
super().__init__(model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
**kwargs)

View File

@ -6,7 +6,7 @@ import math
import logging
import inspect
from typing import Any, Dict, Optional, Union
from typing import Any, Dict, Optional, Union, List
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
@ -16,9 +16,8 @@ from networkx.readwrite import json_graph
import networkx as nx
from mesa import Model
from mesa.datacollection import DataCollector
from . import agents as agentmod, config, serialization, utils, time, network, events
from . import agents as agentmod, config, datacollection, serialization, utils, time, network, events
class BaseEnvironment(Model):
@ -42,7 +41,8 @@ class BaseEnvironment(Model):
dir_path=None,
interval=1,
agent_class=None,
agents: [tuple[type, Dict[str, Any]]] = {},
agents: List[tuple[type, Dict[str, Any]]] = {},
collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
@ -50,7 +50,6 @@ class BaseEnvironment(Model):
):
super().__init__(seed=seed)
self.env_params = env_params or {}
self.current_id = -1
@ -71,11 +70,14 @@ class BaseEnvironment(Model):
self.logger = utils.logger.getChild(self.id)
self.datacollector = DataCollector(
collector_class = serialization.deserialize(collector_class)
self.datacollector = collector_class(
model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
)
for (k, v) in env_params.items():
self[k] = v
def _agent_from_dict(self, agent):
"""
@ -89,7 +91,7 @@ class BaseEnvironment(Model):
return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
def init_agents(self, agents: Union[config.AgentConfig, List[Dict[str, Any]]] = {}):
"""
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
dictionaries that each describes an agent.
@ -170,31 +172,41 @@ class BaseEnvironment(Model):
Advance one step in the simulation, and update the data collection and scheduler appropriately
"""
super().step()
self.logger.info(
f"--- Step: {self.schedule.steps:^5} - Time: {self.now:^5} ---"
)
# self.logger.info(
# "--- Step: {:^5} - Time: {now:^5} ---", steps=self.schedule.steps, now=self.now
# )
self.schedule.step()
self.datacollector.collect(self)
def __contains__(self, key):
return key in self.env_params
def get(self, key, default=None):
"""
Get the value of an environment attribute.
Return `default` if the value is not set.
"""
return self.env_params.get(key, default)
def __getitem__(self, key):
return self.env_params.get(key)
try:
return getattr(self, key)
except AttributeError:
raise KeyError(f"key {key} not found in environment")
def __delitem__(self, key):
return delattr(self, key)
def __contains__(self, key):
return hasattr(self, key)
def __setitem__(self, key, value):
return self.env_params.__setitem__(key, value)
setattr(self, key, value)
def __str__(self):
return str(self.env_params)
return str(dict(self))
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return iter(self.agents())
def get(self, key, default=None):
return self[key] if key in self else default
def keys(self):
return (k for k in self.__dict__ if k[0] != "_")
class NetworkEnvironment(BaseEnvironment):
"""
@ -208,7 +220,12 @@ class NetworkEnvironment(BaseEnvironment):
agents = kwargs.pop("agents", None)
super().__init__(*args, agents=None, **kwargs)
self._set_topology(topology)
if topology is None:
topology = nx.Graph()
elif not isinstance(topology, nx.Graph):
topology = network.from_config(topology, dir_path=self.dir_path)
self.G = topology
self.init_agents(agents)
@ -216,14 +233,14 @@ class NetworkEnvironment(BaseEnvironment):
"""Initialize the agents from a"""
super().init_agents(*args, **kwargs)
for agent in self.schedule._agents.values():
if hasattr(agent, "node_id"):
self._init_node(agent)
self._init_node(agent)
def _init_node(self, agent):
"""
Make sure the node for a given agent has the proper attributes.
"""
self.G.nodes[agent.node_id]["agent"] = agent
if hasattr(agent, "node_id"):
self.G.nodes[agent.node_id]["agent"] = agent
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg, topology=self.G, random=self.random)
@ -244,6 +261,7 @@ class NetworkEnvironment(BaseEnvironment):
agent["unique_id"] = unique_id
agent["topology"] = self.G
node_attrs = self.G.nodes[node_id]
node_attrs.pop('agent', None)
node_attrs.update(agent)
agent = node_attrs
@ -252,17 +270,9 @@ class NetworkEnvironment(BaseEnvironment):
return a
def _set_topology(self, cfg=None, dir_path=None):
if cfg is None:
cfg = nx.Graph()
elif not isinstance(cfg, nx.Graph):
cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.G = cfg
@property
def network_agents(self):
for a in self.schedule._agents:
for a in self.schedule._agents.values():
if isinstance(a, agentmod.NetworkAgent):
yield a
@ -294,7 +304,7 @@ class NetworkEnvironment(BaseEnvironment):
def add_agent(self, *args, **kwargs):
a = super().add_agent(*args, **kwargs)
if "node_id" in a:
if hasattr(a, "node_id"):
assert self.G.nodes[a.node_id]["agent"] == a
return a
@ -309,7 +319,7 @@ class NetworkEnvironment(BaseEnvironment):
if "agent" in node:
continue
a_class = self.random.choices(agent_class, weights)[0]
self.add_agent(node_id=node_id, agent_class=a_class, **agent_params)
self.add_agent(node_id=node_id, topology=self.G, agent_class=a_class, **agent_params)
class EventedEnvironment(BaseEnvironment):

View File

@ -104,17 +104,15 @@ def get_dc_dfs(dc, trial_id=None):
yield from dfs.items()
class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
class SQLite(Exporter):
"""Writes sqlite results"""
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
return
logger.info("Dumping results to %s", self.outdir)
with self.output(self.simulation.name + ".dumped.yml") as f:
f.write(self.simulation.to_yaml())
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
logger.info("Dumping results to %s", self.dbpath)
try_backup(self.dbpath, remove=True)
def trial_end(self, env):
@@ -131,7 +129,6 @@ class default(Exporter):
for (t, df) in self.get_dfs(env):
df.to_sql(t, con=engine, if_exists="append")
class csv(Exporter):
"""Export the state of each environment (and its agents) in a separate CSV file"""
@@ -199,15 +196,61 @@ class summary(Exporter):
"""Print a summary of each trial to sys.stdout"""
def trial_end(self, env):
msg = ""
for (t, df) in self.get_dfs(env):
if not len(df):
continue
msg = indent(str(df.describe()), " ")
logger.info(
dedent(
f"""
tabs = "\t" * 2
description = indent(str(df.describe()), tabs)
last_line = indent(str(df.iloc[-1:]), tabs)
# value_counts = indent(str(df.value_counts()), tabs)
value_counts = indent(str(df.apply(lambda x: x.value_counts()).T.stack()), tabs)
msg += dedent("""
Dataframe {t}:
"""
)
+ msg
)
Last line:
{last_line}
Description:
{description}
Value counts:
{value_counts}
""").format(**locals())
logger.info(msg)
class YAML(Exporter):
"""Writes the configuration of the simulation to a YAML file"""
def sim_start(self):
if self.dry_run:
logger.info("NOT dumping results")
return
with self.output(self.simulation.name + ".dumped.yml") as f:
logger.info(f"Dumping simulation configuration to {self.outdir}")
f.write(self.simulation.to_yaml())
class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
def __init__(self, *args, exporter_cls=[], **kwargs):
exporter_cls = exporter_cls or [YAML, SQLite, summary]
self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
def sim_start(self):
for exporter in self.inner:
exporter.sim_start()
def sim_end(self):
for exporter in self.inner:
exporter.sim_end()
def trial_start(self, env):
for exporter in self.inner:
exporter.trial_start(env)
def trial_end(self, env):
for exporter in self.inner:
exporter.trial_end(env)
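Splitting the old all-in-one exporter this way means the `default` bundle (`YAML` + `SQLite` + `summary`) can be replaced by any combination. A hedged sketch of picking exporters for a simulation; the import paths and the model class string are placeholders/assumptions:

```python
from soil import Simulation, exporters

sim = Simulation(
    model_class="my_module.MyModel",   # placeholder, resolved via serialization.deserialize
    max_steps=50,
    num_trials=2,
    outdir="soil_output",
    # keep the sqlite dump and the stdout summary, add per-trial CSV files
    exporters=[exporters.SQLite, exporters.csv, exporters.summary],
)
sim.run_simulation()
```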

@@ -30,7 +30,7 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
return method(path, **kwargs)
if cfg.params:
net_args = cfg.params.dict()
net_args = dict(cfg.params)
net_gen = net_args.pop("generator")
if dir_path not in sys.path:

@@ -146,7 +146,10 @@ def serialize(v, known_modules=KNOWN_MODULES):
def serialize_dict(d, known_modules=KNOWN_MODULES):
d = dict(d)
try:
d = dict(d)
except (ValueError, TypeError) as ex:
return serialize(d)[0]
for (k, v) in d.items():
if isinstance(v, dict):
d[k] = serialize_dict(v, known_modules=known_modules)

@@ -48,12 +48,17 @@ class Simulation:
max_steps: int = -1
interval: int = 1
num_trials: int = 1
parallel: Optional[bool] = None
exporters: Optional[List[str]] = field(default_factory=list)
num_processes: Optional[int] = 1
parallel: Optional[bool] = False
exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
tables: Optional[Dict[str, Any]] = field(default_factory=dict)
outdir: Optional[str] = None
exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
dry_run: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
skip_test: Optional[bool] = False
@classmethod
def from_dict(cls, env, **kwargs):
@@ -89,7 +94,7 @@ class Simulation:
def run_gen(
self,
parallel=False,
num_processes=1,
dry_run=None,
exporters=None,
outdir=None,
@@ -128,7 +133,7 @@ class Simulation:
for env in utils.run_parallel(
func=self.run_trial,
iterable=range(int(self.num_trials)),
parallel=parallel,
num_processes=num_processes,
log_level=log_level,
**kwargs,
):
@@ -158,8 +163,12 @@ class Simulation:
params.update(model_params)
params.update(kwargs)
agent_reporters = deserialize_reporters(params.pop("agent_reporters", {}))
model_reporters = deserialize_reporters(params.pop("model_reporters", {}))
agent_reporters = self.agent_reporters.copy()
agent_reporters.update(deserialize_reporters(params.pop("agent_reporters", {})))
model_reporters = self.model_reporters.copy()
model_reporters.update(deserialize_reporters(params.pop("model_reporters", {})))
tables = self.tables.copy()
tables.update(deserialize_reporters(params.pop("tables", {})))
env = serialization.deserialize(self.model_class)
return env(
@@ -168,6 +177,7 @@ class Simulation:
dir_path=self.dir_path,
agent_reporters=agent_reporters,
model_reporters=model_reporters,
tables=tables,
**params,
)
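Since `model_reporters`, `agent_reporters` and `tables` are dataclass fields that get merged with anything passed to `run_trial`, reporters can be declared once when the simulation is created. A hedged sketch with made-up reporter names (mesa-style reporter values are assumed to be accepted as-is):

```python
from soil import Simulation

sim = Simulation(
    model_class="my_module.MyModel",    # placeholder
    max_steps=100,
    num_trials=3,
    model_reporters={"n_agents": lambda m: m.schedule.get_agent_count()},
    agent_reporters={"state_id": "state_id"},   # attribute-name reporter
    tables={"events": ["time", "payload"]},     # extra DataCollector table
)
envs = sim.run_simulation()
```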
@@ -234,12 +244,7 @@ Model stats:
def to_dict(self):
d = asdict(self)
if not isinstance(d["model_class"], str):
d["model_class"] = serialization.name(d["model_class"])
d["model_params"] = serialization.serialize_dict(d["model_params"])
d["dir_path"] = str(d["dir_path"])
d["version"] = "2"
return d
return serialization.serialize_dict(d)
def to_yaml(self):
return yaml.dump(self.to_dict())
@@ -261,6 +266,24 @@ def from_config(conf_or_path):
raise AttributeError("Provide only one configuration")
return lst[0]
def iter_from_py(pyfile, module_name='custom_simulation'):
"""Try to load every Simulation instance in a given Python file"""
import importlib
import inspect
spec = importlib.util.spec_from_file_location(module_name, pyfile)
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
# import pdb;pdb.set_trace()
for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
yield sim
del sys.modules[module_name]
def from_py(pyfile):
return next(iter_from_py(pyfile))
def run_from_config(*configs, **kwargs):
for sim in iter_from_config(*configs):
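The new `iter_from_py`/`from_py` helpers import a Python file and pick up every module-level `Simulation` instance defined in it, which is what the reworked example tests below rely on. A hedged sketch with made-up file and model names:

```python
# programmatic.py -- module-level Simulation instances are discovered automatically
from soil import Simulation

sim = Simulation(model_class="my_module.MyModel", max_steps=10, num_trials=1)
```

```python
# elsewhere: load and run the simulations defined in that file
from soil import simulation

for sim in simulation.iter_from_py("programmatic.py"):
    sim.run_simulation(dry_run=True)

first = simulation.from_py("programmatic.py")  # shortcut for the first instance found
```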

@@ -133,10 +133,10 @@ class TimedActivation(BaseScheduler):
"""
self.logger.debug(f"Simulation step {self.time}")
if not self.model.running:
if not self.model.running or self.time == INFINITY:
return
self.logger.debug(f"Queue length: {len(self._queue)}")
self.logger.debug("Queue length: {ql}", ql=len(self._queue))
while self._queue:
((when, _id, cond), agent) = self._queue[0]
@@ -156,7 +156,7 @@ class TimedActivation(BaseScheduler):
agent._last_return = None
agent._last_except = None
self.logger.debug(f"Stepping agent {agent}")
self.logger.debug("Stepping agent {agent}", agent=agent)
self._next.pop(agent.unique_id, None)
try:
@@ -187,6 +187,7 @@ class TimedActivation(BaseScheduler):
return self.time
next_time = self._queue[0][0][0]
if next_time < self.time:
raise Exception(
f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"

@@ -5,7 +5,7 @@ import traceback
from functools import partial
from shutil import copyfile, move
from multiprocessing import Pool
from multiprocessing import Pool, cpu_count
from contextlib import contextmanager
@@ -24,7 +24,7 @@ consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logging.basicConfig(
level=logging.INFO,
level=logging.DEBUG,
handlers=[
consoleHandler,
],
@@ -140,9 +140,11 @@ def run_and_return_exceptions(func, *args, **kwargs):
return ex
def run_parallel(func, iterable, parallel=False, **kwargs):
if parallel and not os.environ.get("SOIL_DEBUG", None):
p = Pool()
def run_parallel(func, iterable, num_processes=1, **kwargs):
if num_processes > 1 and not os.environ.get("SOIL_DEBUG", None):
if num_processes < 1:
num_processes = cpu_count() - num_processes
p = Pool(processes=num_processes)
wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
for i in p.imap_unordered(wrapped_func, iterable):
if isinstance(i, Exception):
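`run_parallel` now takes an explicit `num_processes` instead of a boolean: `1` (the default) stays in the current process, values above one spawn that many pool workers unless `SOIL_DEBUG` is set, and zero or negative values are derived from `cpu_count()`. A hedged sketch, assuming the serial branch yields results just like the parallel one:

```python
from soil import utils  # module path assumed

def run_one(trial_id):
    # stand-in for Simulation.run_trial; must be a module-level (picklable) callable
    return trial_id * trial_id

serial = list(utils.run_parallel(run_one, range(4)))                    # in-process
pooled = list(utils.run_parallel(run_one, range(4), num_processes=4))   # 4 workers
```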

@@ -99,7 +99,7 @@ class TestConfig(TestCase):
with utils.timer("serializing"):
serial = s.to_yaml()
with utils.timer("recovering"):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
recovered = yaml.load(serial, Loader=yaml.FullLoader)
for (k, v) in config.items():
assert recovered[k] == v
@@ -109,24 +109,23 @@ def make_example_test(path, cfg):
root = os.getcwd()
print(path)
s = simulation.from_config(cfg)
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
iterations = s.max_time * s.num_trials
if iterations > 1000:
s.max_time = 100
s.num_trials = 1
if cfg.skip_test and not FORCE_TESTS:
self.skipTest('Example ignored.')
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = cfg.model_params['topology']['params']['n']
assert len(list(env.network_agents)) == n
assert env.now > 0 # It has run
assert env.now <= cfg.max_time # But not further than allowed
except KeyError:
pass
return wrapped

@@ -1,8 +1,9 @@
from unittest import TestCase
import os
from os.path import join
from glob import glob
from soil import serialization, simulation, config
from soil import simulation, config
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")
@@ -14,44 +15,49 @@ class TestExamples(TestCase):
pass
def make_example_test(path, cfg):
def get_test_for_sim(sim, path):
root = os.getcwd()
iterations = sim.max_steps * sim.num_trials
if iterations < 0 or iterations > 1000:
sim.max_steps = 100
sim.num_trials = 1
def wrapped(self):
root = os.getcwd()
for s in simulation.iter_from_config(cfg):
iterations = s.max_steps * s.num_trials
if iterations < 0 or iterations > 1000:
s.max_steps = 100
s.num_trials = 1
assert isinstance(cfg, config.Config)
if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
self.skipTest("Example ignored.")
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = cfg.model_params["network_params"]["n"]
assert len(list(env.network_agents)) == n
except KeyError:
pass
assert env.schedule.steps > 0 # It has run
assert env.schedule.steps <= s.max_steps # But not further than allowed
envs = sim.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = sim.model_params["network_params"]["n"]
assert len(list(env.network_agents)) == n
except KeyError:
pass
assert env.schedule.steps > 0 # It has run
assert env.schedule.steps <= sim.max_steps # But not further than allowed
return wrapped
def add_example_tests():
for cfg, path in serialization.load_files(
join(EXAMPLES, "**", "*.yml"),
):
sim_paths = []
for path in glob(join(EXAMPLES, '**', '*.yml')):
if "soil_output" in path:
continue
p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
for sim in simulation.iter_from_config(path):
sim_paths.append((sim, path))
for path in glob(join(EXAMPLES, '**', '*.py')):
for sim in simulation.iter_from_py(path):
sim_paths.append((sim, path))
for (sim, path) in sim_paths:
if sim.skip_test and not FORCE_TESTS:
continue
test_case = get_test_for_sim(sim, path)
fname = os.path.basename(path)
p.__name__ = "test_example_file_%s" % fname
p.__doc__ = "%s should be a valid configuration" % fname
setattr(TestExamples, p.__name__, p)
del p
test_case.__name__ = "test_example_file_%s" % fname
test_case.__doc__ = "%s should be a valid configuration" % fname
setattr(TestExamples, test_case.__name__, test_case)
del test_case
add_example_tests()