mirror of https://github.com/gsi-upm/soil synced 2025-09-13 19:52:20 +00:00

Compare commits


7 Commits

Author SHA1 Message Date
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
40 changed files with 1583 additions and 1007 deletions

View File

@@ -4,6 +4,17 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [UNRELEASED]
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Agents are now split into groups. Each group may be assigned a given set of agents or an agent distribution, as well as a network topology to which those agents will be assigned.
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
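For reference, a minimal sketch of the datacollector approach (the reporter names and attributes below are illustrative, not part of soil's API):
```python
from mesa.datacollection import DataCollector

# Illustrative reporters; adapt the names to your own environment/agents.
datacollector = DataCollector(
    model_reporters={"num_agents": lambda model: len(list(model.agents))},
    agent_reporters={"times": "times"},
)
# In soil's Environment, reporters like these are passed to the constructor
# and collected automatically on every step().
```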
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'

View File

@@ -39,6 +39,7 @@ As of this writing,
This is a non-exhaustive list of tasks to achieve compatibility:
* Environment.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. Might rename environment_agents to other_agents or something like that
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:

View File

@@ -88,9 +88,18 @@ For example, the following configuration is equivalent to :code:`nx.complete_gra
Environment
============
The environment is the place where the shared state of the simulation is stored.
For instance, the probability of disease outbreak.
The configuration file may specify the initial value of the environment parameters:
That means both global parameters, such as the probability of disease outbreak, and other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions to an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
@@ -98,23 +107,33 @@ The configuration file may specify the initial value of the environment paramete
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment parameters.
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
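As an illustrative sketch (the class, method and parameter names below are made up for this example, not part of soil):

.. code:: python

    from random import random
    from soil import Environment

    class LotteryEnvironment(Environment):
        # Hypothetical helper method: the environment decides whether an agent wins,
        # based on a shared parameter, instead of each agent rolling its own dice.
        def player_wins(self, agent):
            return random() < self.get('winning_probability', 0.001)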
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_type``) and state.
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a delay of one ``interval``.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
There are three three types of agents according to how they are added to the simulation: network agents and environment agent.
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
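For illustration, a minimal agent could be sketched as follows (the class name and state keys are made up; ``BaseAgent``, ``step``, ``self.env`` and the dictionary-style state access come from the agents shipped with soil):

.. code:: python

    from random import random
    from soil.agents import BaseAgent

    class EarthquakeWatcher(BaseAgent):
        # Illustrative agent: it reads a shared environment parameter and updates its own state.
        def step(self):
            if random() < self.env.get('daily_probability_of_earthquake', 0):
                self['earthquakes_witnessed'] = self.get('earthquakes_witnessed', 0) + 1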
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
@@ -125,7 +144,9 @@ Hence, every node in the network will be associated to an agent of that type.
agent_type: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
It is also possible to add more than one type of agent to the simulation.
The ratio of each type can be controlled using the ``weight`` property.
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
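# Illustrative sketch; the exact block from the documentation is not shown in this diff.
network_agents:
- agent_type: CounterModel
  weight: 5
- agent_type: SISaModel
  weight: 1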

View File

@@ -1 +1 @@
ipython==7.31.1
ipython>=7.31.1

View File

@@ -1,27 +1,65 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
version: '2'
general:
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
environment:
environment_class: Environment
params:
am_i_complete: true
agents:
# Agents are split into several groups, each with its own definition
default: # This is a special group. Its values will be used as default values for the rest of the groups
agent_class: CounterModel
topology: default
state:
state_id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_class: Environment
environment_params:
am_i_complete: true
default_state:
incidents: 0
states:
- name: 'The first node'
- name: 'The second node'
times: 1
environment:
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: CounterModel
state:
times: 10
general_counters:
topology: default
distribution:
- agent_class: CounterModel
weight: 1
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
other_counters:
topology: another_graph
fixed:
- agent_class: CounterModel
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
id: 1
# If not specified, it will use the state set in the default
# state:

View File

@@ -14,7 +14,6 @@ network_agents:
weight: 1
environment_class: social_wealth.MoneyEnv
environment_params:
num_mesa_agents: 5
mesa_agent_type: social_wealth.MoneyAgent
N: 10
width: 50

View File

@@ -71,10 +71,9 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(self, N, width, height, *args, network_params, **kwargs):
def __init__(self, width, height, *args, topologies, **kwargs):
network_params['n'] = N
super().__init__(*args, network_params=network_params, **kwargs)
super().__init__(*args, topologies=topologies, **kwargs)
self.grid = MultiGrid(width, height, False)
# Create agents

View File

@@ -1,6 +1,5 @@
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
@@ -9,11 +8,11 @@ interval: 1
max_time: 300
name: Sim_all_dumb
network_agents:
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: true
weight: 1
@@ -24,7 +23,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
@@ -33,19 +31,19 @@ interval: 1
max_time: 300
name: Sim_half_herd
network_agents:
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: true
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: false
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
weight: 1
@@ -56,7 +54,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
@@ -65,12 +62,12 @@ interval: 1
max_time: 300
name: Sim_all_herd
network_agents:
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
@@ -82,7 +79,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
@@ -92,12 +88,12 @@ interval: 1
max_time: 300
name: Sim_wise_herd
network_agents:
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
weight: 1
@@ -108,7 +104,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
@@ -118,12 +113,12 @@ interval: 1
max_time: 300
name: Sim_all_wise
network_agents:
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
state_id: neutral
weight: 1
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
weight: 1

View File

@@ -1,8 +1,8 @@
from soil.agents import FSM, state, default_state, prob
from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging
class DumbViewer(FSM):
class DumbViewer(FSM, NetworkAgent):
'''
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.

View File

@@ -1,4 +1,4 @@
from soil.agents import FSM, state, default_state
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment
from random import random, shuffle
from itertools import islice
@@ -53,7 +53,7 @@ class CityPubs(Environment):
pub['occupancy'] -= 1
class Patron(FSM):
class Patron(FSM, NetworkAgent):
'''Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
@@ -151,7 +151,7 @@ class Patron(FSM):
return befriended
class Police(FSM):
class Police(FSM, NetworkAgent):
'''Simple agent to take drunk people out of pubs.'''
level = logging.INFO

View File

@@ -10,7 +10,7 @@ class Genders(Enum):
female = 'female'
class RabbitModel(FSM):
class RabbitModel(FSM, NetworkAgent):
defaults = {
'age': 0,
@@ -110,12 +110,12 @@ class Female(RabbitModel):
self.info('A mother has died carrying a baby!!')
class RandomAccident(NetworkAgent):
class RandomAccident(BaseAgent):
level = logging.DEBUG
def step(self):
rabbits_total = self.topology.number_of_nodes()
rabbits_total = self.env.topology.number_of_nodes()
if 'rabbits_alive' not in self.env:
self.env['rabbits_alive'] = 0
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
@@ -131,5 +131,5 @@ class RandomAccident(NetworkAgent):
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
i.set_state(i.dead)
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
if self.count_agents(state_id=RabbitModel.dead.id) == self.topology.number_of_nodes():
if self.env.count_agents(state_id=RabbitModel.dead.id) == self.env.topology.number_of_nodes():
self.die()

View File

@@ -1,5 +1,4 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 100
interval: 1

View File

@@ -1,5 +1,4 @@
name: TerroristNetworkModel_sim
load_module: TerroristNetworkModel
max_time: 150
num_trials: 1
network_params:
@@ -9,19 +8,19 @@ network_params:
# theta: 20
n: 100
network_agents:
- agent_type: TerroristNetworkModel
- agent_type: TerroristNetworkModel.TerroristNetworkModel
weight: 0.8
state:
id: civilian # Civilians
- agent_type: TerroristNetworkModel
- agent_type: TerroristNetworkModel.TerroristNetworkModel
weight: 0.1
state:
id: leader # Leaders
- agent_type: TrainingAreaModel
- agent_type: TerroristNetworkModel.TrainingAreaModel
weight: 0.05
state:
id: terrorist # Terrorism
- agent_type: HavenModel
- agent_type: TerroristNetworkModel.HavenModel
weight: 0.05
state:
id: civilian # Civilian

View File

@@ -5,5 +5,5 @@ pyyaml>=5.1
pandas>=0.23
SALib>=1.3
Jinja2
Mesa>=0.8
tsih>=0.1.5
Mesa>=0.8.9
pydantic>=1.9

View File

@@ -49,6 +49,7 @@ setup(
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
entry_points={
'console_scripts':

View File

@@ -1 +1 @@
0.20.5
0.20.7

View File

@@ -36,13 +36,13 @@ def main():
parser.add_argument('--module', '-m', type=str,
help='file containing the code of any custom agents.')
parser.add_argument('--dry-run', '--dry', action='store_true',
help='Do not store the results of the simulation.')
help='Do not store the results of the simulation to disk, show in terminal instead.')
parser.add_argument('--pdb', action='store_true',
help='Use a pdb console in case of exception.')
parser.add_argument('--graph', '-g', action='store_true',
help='Dump GEXF graph. Defaults to false.')
help='Dump each trial\'s network topology as a GEXF graph. Defaults to false.')
parser.add_argument('--csv', action='store_true',
help='Dump history in CSV format. Defaults to false.')
help='Dump all data collected in CSV format. Defaults to false.')
parser.add_argument('--level', type=str,
help='Logging level')
parser.add_argument('--output', '-o', type=str, default="soil_output",

View File

@@ -7,9 +7,15 @@ class CounterModel(NetworkAgent):
in each step and adds it to its state.
"""
defaults = {
'times': 0,
'neighbors': 0,
'total': 0
}
def step(self):
# Outside effects
total = len(list(self.get_agents()))
total = len(list(self.env.agents))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = neighbors
@@ -33,6 +39,6 @@ class AggregatedCounter(NetworkAgent):
self['times'] += 1
neighbors = len(list(self.get_neighboring_agents()))
self['neighbors'] += neighbors
total = len(list(self.get_agents()))
total = len(list(self.env.agents))
self['total'] += total
self.debug('Running for step: {}. Total: {}'.format(self.now, total))

View File

@@ -1,16 +1,20 @@
import logging
from collections import OrderedDict, defaultdict
from collections.abc import MutableMapping, Mapping, Set
from abc import ABCMeta
from copy import deepcopy
from functools import partial, wraps
from itertools import islice
from itertools import islice, chain
import json
import networkx as nx
from .. import serialization, utils, time
from mesa import Agent as MesaAgent
from typing import Dict, List
from tsih import Key
from random import shuffle
from .. import serialization, utils, time, config
from mesa import Agent
def as_node(agent):
@@ -24,9 +28,16 @@ IGNORED_FIELDS = ('model', 'logger')
class DeadAgent(Exception):
pass
class BaseAgent(Agent):
class BaseAgent(MesaAgent, MutableMapping):
"""
A special Agent that keeps track of its state history.
A special type of Mesa Agent that:
* Can be used as a dictionary to access its state.
* Has logging built-in
* Can be given default arguments through a defaults class attribute,
which will be used on construction to initialize each agent's state
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
"""
defaults = {}
@@ -40,17 +51,19 @@ class BaseAgent(Agent):
):
# Check for REQUIRED arguments
# Initialize agent parameters
if isinstance(unique_id, Agent):
if isinstance(unique_id, MesaAgent):
raise Exception()
self._saved = set()
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
self.name = name or '{}[{}]'.format(type(self).__name__, self.unique_id)
self.name = str(name) if name else '{}[{}]'.format(type(self).__name__, self.unique_id)
self._neighbors = None
self.alive = True
self.interval = interval or self.get('interval', 1)
self.logger = logging.getLogger(self.model.name).getChild(self.name)
self.logger = logging.getLogger(self.model.id).getChild(self.name)
if hasattr(self, 'level'):
self.logger.setLevel(self.level)
@@ -59,8 +72,15 @@ class BaseAgent(Agent):
setattr(self, k, deepcopy(v))
for (k, v) in kwargs.items():
setattr(self, k, v)
for (k, v) in getattr(self, 'defaults', {}).items():
if not hasattr(self, k) or getattr(self, k) is None:
setattr(self, k, v)
def __hash__(self):
return hash(self.unique_id)
# TODO: refactor to clean up mesa compatibility
@property
@@ -79,7 +99,6 @@ class BaseAgent(Agent):
def state(self):
'''
Return the agent itself, which behaves as a dictionary.
Changes made to `agent.state` will be reflected in the history.
This method shouldn't be used, but is kept here for backwards compatibility.
'''
@@ -98,23 +117,7 @@ class BaseAgent(Agent):
def environment_params(self, value):
self.model.environment_params = value
def __setattr__(self, key, value):
if not key.startswith('_') and key not in IGNORED_FIELDS:
try:
k = Key(t_step=self.now,
dict_id=self.unique_id,
key=key)
self._saved.add(key)
self.model[k] = value
except AttributeError:
pass
super().__setattr__(key, value)
def __getitem__(self, key):
if isinstance(key, tuple):
key, t_step = key
k = Key(key=key, t_step=t_step, dict_id=self.unique_id)
return self.model[k]
return getattr(self, key)
def __delitem__(self, key):
@@ -126,8 +129,17 @@ class BaseAgent(Agent):
def __setitem__(self, key, value):
setattr(self, key, value)
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return self.items()
def keys(self):
return (k for k in self.__dict__ if k[0] != '_')
def items(self):
return ((k, getattr(self, k)) for k in self._saved)
return ((k, v) for (k, v) in self.__dict__.items() if k[0] != '_')
def get(self, key, default=None):
return self[key] if key in self else default
@@ -145,7 +157,7 @@ class BaseAgent(Agent):
self.alive = False
if remove:
self.remove_node(self.id)
return time.INFINITY
return time.NEVER
def step(self):
if not self.alive:
@@ -165,22 +177,30 @@ class BaseAgent(Agent):
extra['agent_name'] = self.name
return self.logger.log(level, message, extra=extra)
def debug(self, *args, **kwargs):
return self.log(*args, level=logging.DEBUG, **kwargs)
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
# Alias
# Agent = BaseAgent
class NetworkAgent(BaseAgent):
@property
def topology(self):
return self.model.G
return self.env.topology_for(self.unique_id)
@property
def node_id(self):
return self.env.node_id_for(self.unique_id)
@property
def G(self):
return self.model.G
return self.model.topologies[self._topology]
def count_agents(self, **kwargs):
return len(list(self.get_agents(**kwargs)))
@@ -197,16 +217,19 @@ class NetworkAgent(BaseAgent):
it = islice(it, limit)
return list(it)
def iter_agents(self, agents=None, limit_neighbors=False, **kwargs):
def iter_agents(self, unique_id=None, limit_neighbors=False, **kwargs):
if limit_neighbors:
agents = self.topology.neighbors(self.unique_id)
unique_id = [self.topology.nodes[node]['agent_id'] for node in self.topology.neighbors(self.node_id)]
if not unique_id:
return
yield from self.model.agents(unique_id=unique_id, **kwargs)
agents = self.model.get_agents(agents)
return select(agents, **kwargs)
def subgraph(self, center=True, **kwargs):
include = [self] if center else []
return self.topology.subgraph(n.unique_id for n in list(self.get_agents(**kwargs))+include)
G = self.topology.subgraph(n.node_id for n in list(self.get_agents(**kwargs)+include))
return G
def remove_node(self, unique_id):
self.topology.remove_node(unique_id)
@@ -220,7 +243,6 @@ class NetworkAgent(BaseAgent):
self.topology.add_edge(self.unique_id, other.unique_id, edge_attr_dict=edge_attr_dict, *edge_attrs)
def ego_search(self, steps=1, center=False, node=None, **kwargs):
'''Get a list of nodes in the ego network of *node* of radius *steps*'''
node = as_node(node if node is not None else self)
@@ -279,7 +301,7 @@ def default_state(func):
return func
class MetaFSM(type):
class MetaFSM(ABCMeta):
def __init__(cls, name, bases, nmspc):
super(MetaFSM, cls).__init__(name, bases, nmspc)
states = {}
@@ -302,7 +324,7 @@ class MetaFSM(type):
cls.states = states
class FSM(NetworkAgent, metaclass=MetaFSM):
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, *args, **kwargs):
super(FSM, self).__init__(*args, **kwargs)
if not hasattr(self, 'state_id'):
@@ -350,7 +372,7 @@ def prob(prob=1):
def calculate_distribution(network_agents=None,
agent_type=None):
agent_class=None):
'''
Calculate the threshold values (thresholds for a uniform distribution)
of an agent distribution given the weights of each agent type.
@@ -358,13 +380,13 @@ def calculate_distribution(network_agents=None,
The input has this form: ::
[
{'agent_type': 'agent_type_1',
{'agent_class': 'agent_class_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
{'agent_class': 'agent_class_2',
'weight': 0.8,
'state': {
'id': 1
@@ -373,12 +395,12 @@ def calculate_distribution(network_agents=None,
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
'agent_class_1'.
'''
if network_agents:
network_agents = [deepcopy(agent) for agent in network_agents if not hasattr(agent, 'id')]
elif agent_type:
network_agents = [{'agent_type': agent_type}]
elif agent_class:
network_agents = [{'agent_class': agent_class}]
else:
raise ValueError('Specify a distribution or a default agent type')
@@ -398,11 +420,11 @@ def calculate_distribution(network_agents=None,
return network_agents
def serialize_type(agent_type, known_modules=[], **kwargs):
if isinstance(agent_type, str):
return agent_type
def serialize_type(agent_class, known_modules=[], **kwargs):
if isinstance(agent_class, str):
return agent_class
known_modules += ['soil.agents']
return serialization.serialize(agent_type, known_modules=known_modules, **kwargs)[1] # Get the name of the class
return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[1] # Get the name of the class
def serialize_definition(network_agents, known_modules=[]):
@@ -414,23 +436,23 @@ def serialize_definition(network_agents, known_modules=[]):
for v in d:
if 'threshold' in v:
del v['threshold']
v['agent_type'] = serialize_type(v['agent_type'],
v['agent_class'] = serialize_type(v['agent_class'],
known_modules=known_modules)
return d
def deserialize_type(agent_type, known_modules=[]):
if not isinstance(agent_type, str):
return agent_type
def deserialize_type(agent_class, known_modules=[]):
if not isinstance(agent_class, str):
return agent_class
known = known_modules + ['soil.agents', 'soil.agents.custom' ]
agent_type = serialization.deserializer(agent_type, known_modules=known)
return agent_type
agent_class = serialization.deserializer(agent_class, known_modules=known)
return agent_class
def deserialize_definition(ind, **kwargs):
d = deepcopy(ind)
for v in d:
v['agent_type'] = deserialize_type(v['agent_type'], **kwargs)
v['agent_class'] = deserialize_type(v['agent_class'], **kwargs)
return d
@@ -445,7 +467,7 @@ def _validate_states(states, topology):
return states
def _convert_agent_types(ind, to_string=False, **kwargs):
def _convert_agent_classs(ind, to_string=False, **kwargs):
'''Convenience method to allow specifying agents by class or class name.'''
if to_string:
return serialize_definition(ind, **kwargs)
@@ -464,7 +486,7 @@ def _agent_from_definition(definition, value=-1, unique_id=None):
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
return d['agent_class'], state
raise Exception('Definition for value {} not found in: {}'.format(value, definition))
@@ -481,14 +503,15 @@ def _definition_to_dict(definition, size=None, default_state=None):
distro = sorted([item for item in definition if 'weight' in item])
ix = 0
id = 0
def init_agent(item, id=ix):
while id in agents:
id += 1
agent = remaining[id]
agent['state'].update(copy(item.get('state', {})))
agents[id] = agent
agents[agent.unique_id] = agent
del remaining[id]
return agent
@@ -528,32 +551,237 @@ def _definition_to_dict(definition, size=None, default_state=None):
return agents
def select(agents, state_id=None, agent_type=None, ignore=None, iterator=False, **kwargs):
class AgentView(Mapping, Set):
"""A lazy-loaded list of agents.
"""
__slots__ = ("_agents",)
def __init__(self, agents):
self._agents = agents
def __getstate__(self):
return {"_agents": self._agents}
def __setstate__(self, state):
self._agents = state["_agents"]
# Mapping methods
def __len__(self):
return sum(len(x) for x in self._agents.values())
def __iter__(self):
yield from iter(chain.from_iterable(g.values() for g in self._agents.values()))
def __getitem__(self, agent_id):
if isinstance(agent_id, slice):
raise ValueError(f"Slicing is not supported")
for group in self._agents.values():
if agent_id in group:
return group[agent_id]
raise ValueError(f"Agent {agent_id} not found")
def filter(self, *args, **kwargs):
yield from filter_groups(self._agents, *args, **kwargs)
def one(self, *args, **kwargs):
return next(filter_groups(self._agents, *args, **kwargs))
def __call__(self, *args, **kwargs):
return list(self.filter(*args, **kwargs))
def __contains__(self, agent_id):
return any(agent_id in g for g in self._agents.values())
def __str__(self):
return str(list(a.unique_id for a in self))
def __repr__(self):
return f"{self.__class__.__name__}({self})"
def filter_groups(groups, *, group=None, **kwargs):
assert isinstance(groups, dict)
if group is not None and not isinstance(group, list):
group = [group]
if group:
groups = list(groups[g] for g in group if g in groups)
else:
groups = list(groups.values())
agents = chain.from_iterable(filter_group(g, **kwargs) for g in groups)
yield from agents
def filter_group(group, *id_args, unique_id=None, state_id=None, agent_class=None, ignore=None, state=None, **kwargs):
'''
Filter agents given as a dict, by the criteria given as arguments (e.g., certain type or state id).
'''
assert isinstance(group, dict)
ids = []
if unique_id is not None:
if isinstance(unique_id, list):
ids += unique_id
else:
ids.append(unique_id)
if id_args:
ids += id_args
if state_id is not None and not isinstance(state_id, (tuple, list)):
state_id = tuple([state_id])
if agent_type is not None:
if agent_class is not None:
agent_class = deserialize_type(agent_class)
try:
agent_type = tuple(agent_type)
agent_class = tuple(agent_class)
except TypeError:
agent_type = tuple([agent_type])
agent_class = tuple([agent_class])
if ids:
agents = (group[aid] for aid in ids if aid in group)
else:
agents = (a for a in group.values())
f = agents
if ignore:
f = filter(lambda x: x not in ignore, f)
if state_id is not None:
f = filter(lambda agent: agent.get('state_id', None) in state_id, f)
if agent_type is not None:
f = filter(lambda agent: isinstance(agent, agent_type), f)
for k, v in kwargs.items():
if agent_class is not None:
f = filter(lambda agent: isinstance(agent, agent_class), f)
state = state or dict()
state.update(kwargs)
for k, v in state.items():
f = filter(lambda agent: agent.state.get(k, None) == v, f)
if iterator:
return f
return f
yield from f
def from_config(cfg: Dict[str, config.AgentConfig], env):
'''
Agents are specified in groups.
Each group can be specified in two ways: either through a fixed list, in which each item has
the agent type, the number of agents to create, and the other parameters; or through what we call
an `agent distribution`, which is similar but specifies the weight of each agent type instead of
the number of agents.
'''
default = cfg.get('default', None)
return {k: _group_from_config(c, default=default, env=env) for (k, c) in cfg.items() if k != 'default'}
def _group_from_config(cfg: config.AgentConfig, default: config.SingleAgentConfig, env):
agents = {}
if cfg.fixed is not None:
agents = _from_fixed(cfg.fixed, topology=cfg.topology, default=default, env=env)
if cfg.distribution:
n = cfg.n or len(env.topologies[cfg.topology or default.topology])
target = n - len(agents)
agents.update(_from_distro(cfg.distribution, target,
topology=cfg.topology or default.topology,
default=default,
env=env))
assert len(agents) == n
if cfg.override:
for attrs in cfg.override:
if attrs.filter:
filtered = list(filter_group(agents, **attrs.filter))
else:
filtered = list(agents)
if attrs.n > len(filtered):
raise ValueError(f'Not enough agents to sample. Got {len(filtered)}, expected >= {attrs.n}')
for agent in random.sample(filtered, attrs.n):
agent.state.update(attrs.state)
return agents
def _from_fixed(lst: List[config.FixedAgentConfig], topology: str, default: config.SingleAgentConfig, env):
agents = {}
for fixed in lst:
agent_id = fixed.agent_id
if agent_id is None:
agent_id = env.next_id()
cls = serialization.deserialize(fixed.agent_class or default.agent_class)
state = fixed.state.copy()
state.update(default.state)
agent = cls(unique_id=agent_id,
model=env,
**state)
topology = fixed.topology if (fixed.topology is not None) else (topology or default.topology)
if topology:
env.agent_to_node(agent_id, topology, fixed.node_id)
agents[agent.unique_id] = agent
return agents
def _from_distro(distro: List[config.AgentDistro],
n: int,
topology: str,
default: config.SingleAgentConfig,
env):
agents = {}
if n is None:
if any(dist.n is None for dist in distro):
raise ValueError('You must provide a total number of agents, or the number of each type')
n = sum(dist.n for dist in distro)
weights = list(dist.weight if dist.weight is not None else 1 for dist in distro)
minw = min(weights)
norm = list(weight / minw for weight in weights)
total = sum(norm)
chunk = n // total
# random.choices would be enough to get a weighted distribution. But it can vary a lot for smaller k
# So instead we calculate our own distribution to make sure the actual ratios are close to what we would expect
# Calculate how many times each has to appear
indices = list(chain.from_iterable([idx] * int(n*chunk) for (idx, n) in enumerate(norm)))
# Complete with random agents following the original weight distribution
if len(indices) < n:
indices += random.choices(list(range(len(distro))), weights=[d.weight for d in distro], k=n-len(indices))
# Deserialize classes for efficiency
classes = list(serialization.deserialize(i.agent_class or default.agent_class) for i in distro)
# Add them in random order
random.shuffle(indices)
for idx in indices:
d = distro[idx]
cls = classes[idx]
agent_id = env.next_id()
state = d.state.copy()
if default:
state.update(default.state)
agent = cls(unique_id=agent_id, model=env, **state)
topology = d.topology if (d.topology is not None) else topology or default.topology
if topology:
env.agent_to_node(agent.unique_id, topology)
assert agent.name is not None
assert agent.name != 'None'
assert agent.name
agents[agent.unique_id] = agent
return agents
from .BassModel import *

soil/config.py (new file, 242 lines)
View File

@@ -0,0 +1,242 @@
from __future__ import annotations
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml
import os
import sys
from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
import networkx as nx
class General(BaseModel):
id: str = 'Unnamed Simulation'
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
interval: float = 1
seed: str = ""
@staticmethod
def default():
return General()
# Could use TypeAlias in python >= 3.10
nodeId = int
class Node(BaseModel):
id: nodeId
state: Optional[Dict[str, Any]] = {}
class Edge(BaseModel):
source: nodeId
target: nodeId
value: Optional[float] = 1
class Topology(BaseModel):
nodes: List[Node]
directed: bool
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
group: str = 'network'
params: Optional[NetParams]
topology: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
class Config:
arbitrary_types_allowed = True
@staticmethod
def default():
return NetConfig(topology=None, params=None)
@root_validator
def validate_all(cls, values):
if 'params' not in values and 'topology' not in values:
raise ValueError('You must specify either a topology or the parameters to generate a graph')
return values
class EnvConfig(BaseModel):
environment_class: Union[Type, str] = 'soil.Environment'
params: Dict[str, Any] = {}
schedule: Union[Type, str] = 'soil.time.TimedActivation'
@staticmethod
def default():
return EnvConfig()
class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None
agent_id: Optional[int] = None
topology: Optional[str] = None
node_id: Optional[Union[int, str]] = None
name: Optional[str] = None
state: Optional[Dict[str, Any]] = {}
class FixedAgentConfig(SingleAgentConfig):
n: Optional[int] = 1
@root_validator
def validate_all(cls, values):
if values.get('agent_id', None) is not None and values.get('n', 1) > 1:
print(values)
raise ValueError(f"An agent_id can only be provided when there is only one agent ({values.get('n')} given)")
return values
class OverrideAgentConfig(FixedAgentConfig):
filter: Optional[Dict[str, Any]] = None
class AgentDistro(SingleAgentConfig):
weight: Optional[float] = 1
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
topology: Optional[str] = None
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
@root_validator
def validate_all(cls, values):
if 'distribution' in values and ('n' not in values and 'topology' not in values):
raise ValueError("You need to provide the number of agents or a topology to extract the value from.")
return values
class Config(BaseModel, extra=Extra.forbid):
version: Optional[str] = '1'
general: General = General.default()
topologies: Optional[Dict[str, NetConfig]] = {}
environment: EnvConfig = EnvConfig.default()
agents: Optional[Dict[str, AgentConfig]] = {}
def convert_old(old, strict=True):
'''
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
'''
new = {}
general = {}
for k in ['id',
'group',
'dir_path',
'num_trials',
'max_time',
'interval',
'seed']:
if k in old:
general[k] = old[k]
if 'name' in old:
general['id'] = old['name']
network = {}
if 'network_params' in old and old['network_params']:
for (k, v) in old['network_params'].items():
if k == 'path':
network['path'] = v
else:
network.setdefault('params', {})[k] = v
if 'topology' in old:
network['topology'] = old['topology']
agents = {
'network': {},
'default': {},
}
if 'agent_type' in old:
agents['default']['agent_class'] = old['agent_type']
if 'default_state' in old:
agents['default']['state'] = old['default_state']
def updated_agent(agent):
newagent = dict(agent)
newagent['agent_class'] = newagent['agent_type']
del newagent['agent_type']
return newagent
for agent in old.get('environment_agents', []):
agents['environment'] = {'distribution': [], 'fixed': []}
if 'agent_id' in agent:
agent['name'] = agent['agent_id']
del agent['agent_id']
agents['environment']['fixed'].append(updated_agent(agent))
by_weight = []
fixed = []
override = []
if 'network_agents' in old:
agents['network']['topology'] = 'default'
for agent in old['network_agents']:
agent = updated_agent(agent)
if 'agent_id' in agent:
fixed.append(agent)
else:
by_weight.append(agent)
if 'agent_type' in old and (not fixed and not by_weight):
agents['network']['topology'] = 'default'
by_weight = [{'agent_class': old['agent_type']}]
# TODO: translate states properly
if 'states' in old:
states = old['states']
if isinstance(states, dict):
states = states.items()
else:
states = enumerate(states)
for (k, v) in states:
override.append({'filter': {'node_id': k},
'state': v
})
agents['network']['override'] = override
agents['network']['fixed'] = fixed
agents['network']['distribution'] = by_weight
environment = {'params': {}}
if 'environment_class' in old:
environment['environment_class'] = old['environment_class']
for (k, v) in old.get('environment_params', {}).items():
environment['params'][k] = v
return Config(version='2',
general=general,
topologies={'default': network},
environment=environment,
agents=agents)
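# Illustrative usage sketch (not part of this changeset): converting an old-style
# configuration loaded from YAML into the new pydantic-based format.
#
#     with open('old_simulation.yml') as f:
#         old_cfg = yaml.safe_load(f)
#     new_cfg = convert_old(old_cfg)
#     print(new_cfg.general.id, list(new_cfg.topologies))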

View File

@@ -8,19 +8,17 @@ class SoilDataCollector(MDC):
# Populate model and env reporters so they have a key per
# So they can be shown in the web interface
self.environment = environment
raise NotImplementedError()
@property
def model_vars(self):
pass
raise NotImplementedError()
@model_vars.setter
def model_vars(self, value):
pass
raise NotImplementedError()
@property
def agent_reporters(self):
self.model._history._
pass
raise NotImplementedError()

View File

@@ -1,30 +1,27 @@
from __future__ import annotations
import os
import sqlite3
import csv
import math
import random
import yaml
import tempfile
import logging
import pandas as pd
from typing import Dict
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
from networkx.readwrite import json_graph
import networkx as nx
from tsih import History, Record, Key, NoHistory
from mesa import Model
from mesa.datacollection import DataCollector
from . import serialization, agents, analysis, utils, time
from . import serialization, agents, analysis, utils, time, config, network
Record = namedtuple('Record', 'dict_id t_step key value')
# These properties will be copied when pickling/unpickling the environment
_CONFIG_PROPS = [ 'name',
'states',
'default_state',
'interval',
]
class Environment(Model):
"""
@@ -33,76 +30,77 @@ class Environment(Model):
params, which are used as shared state between agents.
The environment parameters and the state of every agent can be accessed
both by using the environment as a dictionary or with the environment's
both by using the environment as a dictionary or with the environment's
:meth:`soil.environment.Environment.get` method.
"""
def __init__(self, name=None,
network_agents=None,
environment_agents=None,
states=None,
default_state=None,
interval=1,
network_params=None,
seed=None,
topology=None,
def __init__(self,
env_id='unnamed_env',
seed='default',
schedule=None,
initial_time=0,
environment_params=None,
history=True,
dir_path=None,
**kwargs):
interval=1,
agents: Dict[str, config.AgentConfig] = {},
topologies: Dict[str, config.NetConfig] = {},
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
**env_params):
super().__init__()
self.current_id = -1
self.seed = '{}_{}'.format(seed, env_id)
self.id = env_id
self.dir_path = dir_path or os.getcwd()
self.schedule = schedule
if schedule is None:
self.schedule = time.TimedActivation()
schedule = time.TimedActivation()
self.schedule = schedule
self.name = name or 'UnnamedEnvironment'
seed = seed or current_time()
random.seed(seed)
if isinstance(states, list):
states = dict(enumerate(states))
self.states = deepcopy(states) if states else {}
self.default_state = deepcopy(default_state) or {}
if topology is None:
network_params = network_params or {}
topology = serialization.load_network(network_params,
dir_path=dir_path)
if not topology:
topology = nx.Graph()
self.G = nx.Graph(topology)
self.environment_params = environment_params or {}
self.environment_params.update(kwargs)
self.topologies = {}
self._node_ids = {}
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg,
graph=name)
self.agents = agents or {}
self.env_params = env_params or {}
self._env_agents = {}
self.interval = interval
if history:
history = History
else:
history = NoHistory
self._history = history(name=self.name,
backup=True)
self['SEED'] = seed
if network_agents:
distro = agents.calculate_distribution(network_agents)
self.network_agents = agents._convert_agent_types(distro)
else:
self.network_agents = []
self.logger = utils.logger.getChild(self.id)
self.datacollector = DataCollector(model_reporters, agent_reporters, tables)
environment_agents = environment_agents or []
if environment_agents:
distro = agents.calculate_distribution(environment_agents)
environment_agents = agents._convert_agent_types(distro)
self.environment_agents = environment_agents
@property
def topology(self):
return self.topologies['default']
self.logger = utils.logger.getChild(self.name)
@property
def network_agents(self):
yield from self.agents(agent_class=agents.NetworkAgent)
@staticmethod
def from_config(conf: config.Config, trial_id, **kwargs) -> Environment:
'''Create an environment for a trial of the simulation'''
conf = conf
if kwargs:
conf = config.Config(**conf.dict(exclude_defaults=True), **kwargs)
seed = '{}_{}'.format(conf.general.seed, trial_id)
id = '{}_trial_{}'.format(conf.general.id, trial_id).replace('.', '-')
opts = conf.environment.params.copy()
dir_path = conf.general.dir_path
opts.update(conf)
opts.update(kwargs)
env = serialization.deserialize(conf.environment.environment_class)(env_id=id, seed=seed, dir_path=dir_path, **opts)
return env
@property
def now(self):
@@ -110,68 +108,91 @@ class Environment(Model):
return self.schedule.time
raise Exception('The environment has not been scheduled, so it has no sense of time')
def topology_for(self, agent_id):
return self.topologies[self._node_ids[agent_id][0]]
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
def set_topology(self, cfg=None, dir_path=None, graph='default'):
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology
@property
def agents(self):
yield from self.environment_agents
yield from self.network_agents
return agents.AgentView(self._agents)
def count_agents(self, *args, **kwargs):
return sum(1 for i in self.find_all(*args, **kwargs))
@property
def environment_agents(self):
for ref in self._env_agents.values():
yield ref
def find_all(self, *args, **kwargs):
return agents.AgentView(self._agents).filter(*args, **kwargs)
@environment_agents.setter
def environment_agents(self, environment_agents):
self._environment_agents = environment_agents
def find_one(self, *args, **kwargs):
return agents.AgentView(self._agents).one(*args, **kwargs)
self._env_agents = agents._definition_to_dict(definition=environment_agents)
@agents.setter
def agents(self, agents_def: Dict[str, config.AgentConfig]):
self._agents = agents.from_config(agents_def, env=self)
for d in self._agents.values():
for a in d.values():
self.schedule.add(a)
@property
def network_agents(self):
for i in self.G.nodes():
node = self.G.nodes[i]
if 'agent' in node:
yield node['agent']
@network_agents.setter
def network_agents(self, network_agents):
self._network_agents = network_agents
for ix in self.G.nodes():
self.init_agent(ix, agent_definitions=network_agents)
def init_agent(self, agent_id, agent_definitions):
node = self.G.nodes[agent_id]
def init_agent(self, agent_id, agent_definitions, graph='default'):
node = self.topologies[graph].nodes[agent_id]
init = False
state = dict(node)
agent_type = None
if 'agent_type' in self.states.get(agent_id, {}):
agent_type = self.states[agent_id]['agent_type']
elif 'agent_type' in node:
agent_type = node['agent_type']
elif 'agent_type' in self.default_state:
agent_type = self.default_state['agent_type']
agent_class = None
if 'agent_class' in self.states.get(agent_id, {}):
agent_class = self.states[agent_id]['agent_class']
elif 'agent_class' in node:
agent_class = node['agent_class']
elif 'agent_class' in self.default_state:
agent_class = self.default_state['agent_class']
if agent_type:
agent_type = agents.deserialize_type(agent_type)
if agent_class:
agent_class = agents.deserialize_type(agent_class)
elif agent_definitions:
agent_type, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
agent_class, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
else:
serialization.logger.debug('Skipping node {}'.format(agent_id))
return
return self.set_agent(agent_id, agent_type, state)
return self.set_agent(agent_id, agent_class, state)
def set_agent(self, agent_id, agent_type, state=None):
node = self.G.nodes[agent_id]
def agent_to_node(self, agent_id, graph_name='default', node_id=None, shuffle=False):
#TODO: test
if node_id is None:
G = self.topologies[graph_name]
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if data.get('agent_id', None) is None:
node_id = next_id
data['agent_id'] = agent_id
break
self._node_ids[agent_id] = (graph_name, node_id)
print(self._node_ids)
def set_agent(self, agent_id, agent_class, state=None, graph='default'):
node = self.topologies[graph].nodes[agent_id]
defstate = deepcopy(self.default_state) or {}
defstate.update(self.states.get(agent_id, {}))
defstate.update(node.get('state', {}))
if state:
defstate.update(state)
a = None
if agent_type:
if agent_class:
state = defstate
a = agent_type(model=self,
a = agent_class(model=self,
unique_id=agent_id
)
@@ -182,20 +203,20 @@ class Environment(Model):
self.schedule.add(a)
return a
def add_node(self, agent_type, state=None):
agent_id = int(len(self.G.nodes()))
self.G.add_node(agent_id)
a = self.set_agent(agent_id, agent_type, state)
def add_node(self, agent_class, state=None, graph='default'):
agent_id = int(len(self.topologies[graph].nodes()))
self.topologies[graph].add_node(agent_id)
a = self.set_agent(agent_id, agent_class, state, graph=graph)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, start=None, **attrs):
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
if hasattr(agent1, 'id'):
agent1 = agent1.id
if hasattr(agent2, 'id'):
agent2 = agent2.id
start = start or self.now
return self.G.add_edge(agent1, agent2, **attrs)
return self.topologies[graph].add_edge(agent1, agent2, **attrs)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
@@ -206,184 +227,67 @@ class Environment(Model):
message += " {k}={v} ".format(k=k, v=v)
extra = {}
extra['now'] = self.now
extra['unique_id'] = self.name
extra['unique_id'] = self.id
return self.logger.log(level, message, extra=extra)
def step(self):
'''
Advance one step in the simulation, and update the data collection and scheduler appropriately
'''
super().step()
self.schedule.step()
self.datacollector.collect(self)
def run(self, until, *args, **kwargs):
self._save_state()
until = until or float('inf')
while self.schedule.next_time < until:
self.step()
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
self.schedule.time = until
self._history.flush_cache()
def _save_state(self, now=None):
serialization.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
return self._history[key]
return self.environment_params[key]
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = Key(*key)
self._history.save_record(*k,
value=value)
return
self.environment_params[key] = value
self._history.save_record(dict_id='env',
t_step=self.now,
key=key,
value=value)
def __contains__(self, key):
return key in self.environment_params
return key in self.env_params
def get(self, key, default=None):
'''
Get the value of an environment attribute in a
given point in the simulation (history).
If key is an attribute name, this method returns
the current value.
To get values at other times, use a
:meth: `soil.history.Key` tuple.
Get the value of an environment attribute.
Return `default` if the value is not set.
'''
return self[key] if key in self else default
return self.env_params.get(key, default)
def get_agent(self, agent_id):
return self.G.nodes[agent_id]['agent']
def __getitem__(self, key):
return self.env_params.get(key)
def get_agents(self, nodes=None):
if nodes is None:
return self.agents
return (self.G.nodes[i]['agent'] for i in nodes)
def __setitem__(self, key, value):
return self.env_params.__setitem__(key, value)
def dump_csv(self, f):
with utils.open_or_reuse(f, 'w') as f:
cr = csv.writer(f)
cr.writerow(('agent_id', 't_step', 'key', 'value'))
for i in self.history_to_tuples():
cr.writerow(i)
def dump_gexf(self, f):
G = self.history_to_graph()
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
nx.write_gexf(G, f, version="1.2draft")
def dump(self, *args, formats=None, **kwargs):
if not formats:
return
functions = {
'csv': self.dump_csv,
'gexf': self.dump_gexf
}
for f in formats:
if f in functions:
functions[f](*args, **kwargs)
else:
raise ValueError('Unknown format: {}'.format(f))
def dump_sqlite(self, f):
return self._history.dump(f)
def state_to_tuples(self, now=None):
def _agent_to_tuples(self, agent, now=None):
if now is None:
now = self.now
for k, v in self.environment_params.items():
for k, v in agent.state.items():
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
def state_to_tuples(self, agent_id=None, now=None):
if now is None:
now = self.now
if agent_id:
agent = self.agents[agent_id]
yield from self._agent_to_tuples(agent, now)
return
for k, v in self.env_params.items():
yield Record(dict_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
for k, v in agent.state.items():
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
yield from self._agent_to_tuples(agent, now)
def history_to_tuples(self):
return self._history.to_tuples()
def history_to_graph(self):
G = nx.Graph(self.G)
for agent in self.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
history = self[agent.id, None, None]
if not history:
continue
for t_step, attribute, value in sorted(list(history)):
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
def __getstate__(self):
state = {}
for prop in _CONFIG_PROPS:
state[prop] = self.__dict__[prop]
state['G'] = json_graph.node_link_data(self.G)
state['environment_agents'] = self._env_agents
state['history'] = self._history
state['schedule'] = self.schedule
return state
def __setstate__(self, state):
for prop in _CONFIG_PROPS:
self.__dict__[prop] = state[prop]
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
self._history = state['history']
# self._env = None
self.schedule = state['schedule']
self._queue = []
SoilEnvironment = Environment

View File

@@ -1,7 +1,8 @@
import os
import csv as csvlib
import time
from time import time as current_time
from io import BytesIO
from sqlalchemy import create_engine
import matplotlib.pyplot as plt
import networkx as nx
@@ -48,20 +49,24 @@ class Exporter:
self.simulation = simulation
outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
self.outdir = os.path.join(outdir,
simulation.group or '',
simulation.name)
simulation.config.general.group or '',
simulation.config.general.id)
self.dry_run = dry_run
self.copy_to = copy_to
def start(self):
def sim_start(self):
'''Method to call when the simulation starts'''
pass
def end(self, stats):
def sim_end(self):
'''Method to call when the simulation ends'''
pass
def trial(self, env, stats):
def trial_start(self, env):
'''Method to call when a trial start'''
pass
def trial_end(self, env):
'''Method to call when a trial ends'''
pass
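# Illustrative sketch (not part of this changeset) of a custom exporter built on these hooks:
#
#     class step_logger(Exporter):
#         def trial_end(self, env):
#             with self.output('trials.txt', 'a') as f:
#                 f.write('{} finished at step {}\n'.format(env.id, env.now))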
@@ -80,79 +85,148 @@ class Exporter:
class default(Exporter):
'''Default exporter. Writes sqlite results, as well as the simulation YAML'''
def start(self):
if not self.dry_run:
logger.info('Dumping results to %s', self.outdir)
self.simulation.dump_yaml(outdir=self.outdir)
else:
logger.info('NOT dumping results')
# def sim_start(self):
# if not self.dry_run:
# logger.info('Dumping results to %s', self.outdir)
# self.simulation.dump_yaml(outdir=self.outdir)
# else:
# logger.info('NOT dumping results')
def trial(self, env, stats):
if not self.dry_run:
with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
env.name)):
with self.output('{}.sqlite'.format(env.name), mode='wb') as f:
env.dump_sqlite(f)
# def trial_start(self, env, stats):
# if not self.dry_run:
# with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
# env.name)):
# engine = create_engine('sqlite:///{}.sqlite'.format(env.name), echo=False)
def end(self, stats):
with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)):
with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f:
self.simulation.dump_sqlite(f)
# dc = env.datacollector
# tables = {'env': dc.get_model_vars_dataframe(),
# 'agents': dc.get_agent_vars_dataframe(),
# 'agents': dc.get_agent_vars_dataframe()}
# for table in dc.tables:
# tables[table] = dc.get_table_dataframe(table)
# for (t, df) in tables.items():
# df.to_sql(t, con=engine)
# def sim_end(self, stats):
# with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)):
# engine = create_engine('sqlite:///{}.sqlite'.format(self.simulation.name), echo=False)
# with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f:
# self.simulation.dump_sqlite(f)
def get_dc_dfs(dc):
dfs = {'env': dc.get_model_vars_dataframe(),
'agents': dc.get_agent_vars_dataframe }
for table_name in dc.tables:
dfs[table_name] = dc.get_table_dataframe(table_name)
yield from dfs.items()
class csv(Exporter):
'''Export the state of each environment (and its agents) in a separate CSV file'''
def trial(self, env, stats):
def trial_end(self, env):
with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.simulation.name,
env.name,
env.id,
self.outdir)):
with self.output('{}.csv'.format(env.name)) as f:
env.dump_csv(f)
with self.output('{}.stats.csv'.format(env.name)) as f:
statwriter = csvlib.writer(f, delimiter='\t', quotechar='"', quoting=csvlib.QUOTE_ALL)
for stat in stats:
statwriter.writerow(stat)
for (df_name, df) in get_dc_dfs(env.datacollector):
with self.output('{}.stats.{}.csv'.format(env.id, df_name)) as f:
df.to_csv(f)
class gexf(Exporter):
def trial(self, env, stats):
def trial_end(self, env):
if self.dry_run:
logger.info('Not dumping GEXF in dry_run mode')
return
with timer('[GEXF] Dumping simulation {} trial {}'.format(self.simulation.name,
env.name)):
with self.output('{}.gexf'.format(env.name), mode='wb') as f:
env.dump_gexf(f)
env.id)):
with self.output('{}.gexf'.format(env.id), mode='wb') as f:
self.dump_gexf(env, f)
def dump_gexf(self, env, f):
G = env.history_to_graph()
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
nx.write_gexf(G, f, version="1.2draft")
class dummy(Exporter):
def start(self):
def sim_start(self):
with self.output('dummy', 'w') as f:
f.write('simulation started @ {}\n'.format(time.time()))
f.write('simulation started @ {}\n'.format(current_time()))
def trial(self, env, stats):
def trial_start(self, env):
with self.output('dummy', 'w') as f:
for i in env.history_to_tuples():
f.write(','.join(map(str, i)))
f.write('\n')
f.write('trial started@ {}\n'.format(current_time()))
def sim(self, stats):
def trial_end(self, env):
with self.output('dummy', 'w') as f:
f.write('trial ended@ {}\n'.format(current_time()))
def sim_end(self):
with self.output('dummy', 'a') as f:
f.write('simulation ended @ {}\n'.format(time.time()))
f.write('simulation ended @ {}\n'.format(current_time()))
class graphdrawing(Exporter):
def trial(self, env, stats):
def trial_end(self, env):
# Outside effects
f = plt.figure()
nx.draw(env.G, node_size=10, width=0.2, pos=nx.spring_layout(env.G, scale=100), ax=f.add_subplot(111))
with open('graph-{}.png'.format(env.name)) as f:
with open('graph-{}.png'.format(env.id)) as f:
f.savefig(f)
'''
Convert an environment into a NetworkX graph
'''
def env_to_graph(env, history=None):
G = nx.Graph(env.G)
for agent in env.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
if not history:
history = sorted(list(env.state_to_tuples()))
for _, t_step, attribute, value in history:
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
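`env_to_graph` encodes the recorded attribute changes of every network agent as GEXF intervals ('spells'). A minimal usage sketch, assuming `env` is a finished trial environment with a recorded history:

```python
import networkx as nx

G = env_to_graph(env)  # `env` is hypothetical here
nx.write_gexf(G, 'dynamic.gexf', version='1.2draft')
```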

soil/network.py (new file, +42 lines)

@@ -0,0 +1,42 @@
from typing import Dict
import os
import sys
import networkx as nx
from . import config, serialization, basestring
def from_config(cfg: config.NetConfig, dir_path: str = None):
if not isinstance(cfg, config.NetConfig):
cfg = config.NetConfig(**cfg)
if cfg.path:
path = cfg.path
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
return method(path, **kwargs)
if cfg.params:
net_args = cfg.params.dict()
net_gen = net_args.pop('generator')
if dir_path not in sys.path:
sys.path.append(dir_path)
method = serialization.deserializer(net_gen,
known_modules=['networkx.generators',])
return method(**net_args)
if isinstance(cfg.topology, basestring) or isinstance(cfg.topology, dict):
return nx.json_graph.node_link_graph(cfg.topology)
return nx.Graph()
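A short sketch of how this new helper is meant to be driven from a `NetConfig`; the parameter values are illustrative, and the `params`/`path` fields are the ones exercised by the tests and example configuration later in this changeset:

```python
from soil import config, network

# Build a topology from a networkx generator and its parameters
netcfg = config.NetConfig(params={'generator': 'complete_graph', 'n': 10})
G = network.from_config(netcfg)
assert len(G.nodes) == 10

# Or load one from disk; .gexf files are read with node_type=int
# netcfg = config.NetConfig(path='mynetwork.gexf')
# G = network.from_config(netcfg, dir_path='/path/to/assets')
```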


@@ -2,6 +2,7 @@ import os
import logging
import ast
import sys
import re
import importlib
from glob import glob
from itertools import product, chain
@@ -15,38 +16,39 @@ from jinja2 import Template
logger = logging.getLogger('soil')
def load_network(network_params, dir_path=None):
G = nx.Graph()
# def load_network(network_params, dir_path=None):
# G = nx.Graph()
if 'path' in network_params:
path = network_params['path']
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
G = method(path, **kwargs)
# if not network_params:
# return G
elif 'generator' in network_params:
net_args = network_params.copy()
net_gen = net_args.pop('generator')
# if 'path' in network_params:
# path = network_params['path']
# if dir_path and not os.path.isabs(path):
# path = os.path.join(dir_path, path)
# extension = os.path.splitext(path)[1][1:]
# kwargs = {}
# if extension == 'gexf':
# kwargs['version'] = '1.2draft'
# kwargs['node_type'] = int
# try:
# method = getattr(nx.readwrite, 'read_' + extension)
# except AttributeError:
# raise AttributeError('Unknown format')
# G = method(path, **kwargs)
if dir_path not in sys.path:
sys.path.append(dir_path)
# elif 'generator' in network_params:
# net_args = network_params.copy()
# net_gen = net_args.pop('generator')
method = deserializer(net_gen,
known_modules=['networkx.generators',])
G = method(**net_args)
return G
# if dir_path not in sys.path:
# sys.path.append(dir_path)
# method = deserializer(net_gen,
# known_modules=['networkx.generators',])
# G = method(**net_args)
# return G
def load_file(infile):
@@ -120,8 +122,8 @@ def load_files(*patterns, **kwargs):
for i in glob(pattern, **kwargs):
for config in load_file(i):
path = os.path.abspath(i)
if 'dir_path' not in config:
config['dir_path'] = os.path.dirname(path)
if 'general' in config and 'dir_path' not in config['general']:
config['general']['dir_path'] = os.path.dirname(path)
yield config, path
@@ -134,7 +136,9 @@ def load_config(config):
builtins = importlib.import_module('builtins')
def name(value, known_modules=[]):
KNOWN_MODULES = ['soil', ]
def name(value, known_modules=KNOWN_MODULES):
'''Return a name that can be imported, to serialize/deserialize an object'''
if value is None:
return 'None'
@@ -163,13 +167,16 @@ def serializer(type_):
return lambda x: x
def serialize(v, known_modules=[]):
def serialize(v, known_modules=KNOWN_MODULES):
'''Get a text representation of an object.'''
tname = name(v, known_modules=known_modules)
func = serializer(tname)
return func(v), tname
def deserializer(type_, known_modules=[]):
IS_CLASS = re.compile(r"<class '(.*)'>")
def deserializer(type_, known_modules=KNOWN_MODULES):
if type(type_) != str: # Already deserialized
return type_
if type_ == 'str':
@@ -179,17 +186,23 @@ def deserializer(type_, known_modules=[]):
if hasattr(builtins, type_): # Check if it's a builtin type
cls = getattr(builtins, type_)
return lambda x=None: ast.literal_eval(x) if x is not None else cls()
match = IS_CLASS.match(type_)
if match:
modname, tname = match.group(1).rsplit(".", 1)
module = importlib.import_module(modname)
cls = getattr(module, tname)
return getattr(cls, 'deserialize', cls)
# Otherwise, see if we can find the module and the class
modules = known_modules or []
options = []
for mod in modules:
for mod in known_modules:
if mod:
options.append((mod, type_))
if '.' in type_: # Fully qualified module
module, type_ = type_.rsplit(".", 1)
options.append ((module, type_))
options.append((module, type_))
errors = []
for modname, tname in options:
@@ -212,11 +225,11 @@ def deserialize(type_, value=None, **kwargs):
return des(value)
def deserialize_all(names, *args, known_modules=['soil'], **kwargs):
'''Return the set of exporters for a simulation, given the exporter names'''
exporters = []
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
'''Return the list of deserialized objects'''
objects = []
for name in names:
mod = deserialize(name, known_modules=known_modules)
exporters.append(mod(*args, **kwargs))
return exporters
objects.append(mod(*args, **kwargs))
return objects
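The net effect of these serialization changes is that the `soil` modules are searched by default and that strings of the form `"<class 'module.Name'>"` can be resolved back into a class. A rough round-trip sketch (the first two assertions mirror `test_serialize_class` further below):

```python
from soil import serialization, agents

# Class -> importable name; classes in known modules get the short form
_, name = serialization.serialize(agents.BaseAgent, known_modules=['soil'])
assert name == 'BaseAgent'

_, name = serialization.serialize(agents.BaseAgent, known_modules=[])
assert name == 'soil.agents.BaseAgent'

# Name -> class again, searching the known modules
cls = serialization.deserializer('BaseAgent', known_modules=['soil.agents'])
```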


@@ -1,4 +1,5 @@
import os
from time import time as current_time, strftime
import importlib
import sys
import yaml
@@ -6,135 +7,46 @@ import traceback
import logging
import networkx as nx
from time import strftime
from networkx.readwrite import json_graph
from multiprocessing import Pool
from functools import partial
from tsih import History
import pickle
from . import serialization, utils, basestring, agents
from .environment import Environment
from .utils import logger
from .exporters import default
from .stats import defaultStats
from .config import Config, convert_old
#TODO: change documentation for simulation
class Simulation:
"""
Similar to nsim.NetworkSimulation with three main differences:
1) agent type can be specified by name or by class.
2) instead of just one type, a network agents distribution can be used.
The distribution specifies the weight (or probability) of each
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
Parameters
---------
name : str, optional
config (optional): :class:`config.Config`
name of the Simulation
group : str, optional
a group name can be used to link simulations
topology : networkx.Graph instance, optional
network_params : dict
parameters used to create a topology with networkx, if no topology is given
network_agents : dict
definition of agents to populate the topology with
agent_type : NetworkAgent subclass, optional
Default type of NetworkAgent to use for nodes not specified in network_agents
states : list, optional
List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
whose value indicates the state
dir_path: str, optional
Directory path to load simulation assets (files, modules...)
seed : str, optional
Seed to use for the random generator
num_trials : int, optional
Number of independent simulation runs
max_time : int, optional
Maximum time the simulation should run for
environment_params : dict, optional
Dictionary of globally-shared environmental parameters
environment_agents: dict, optional
Similar to network_agents. Distribution of Agents that control the environment
environment_class: soil.environment.Environment subclass, optional
Class for the environment. It defaults to soil.environment.Environment
load_module : str, module name, deprecated
If specified, soil will load the content of this module under 'soil.agents.custom'
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
"""
def __init__(self, name=None, group=None, topology=None, network_params=None,
network_agents=None, agent_type=None, states=None,
default_state=None, interval=1, num_trials=1,
max_time=100, load_module=None, seed=None,
dir_path=None, environment_agents=None,
environment_params=None, environment_class=None,
def __init__(self, config=None,
**kwargs):
self.load_module = load_module
self.network_params = network_params
self.name = name or 'Unnamed'
self.seed = str(seed or name)
self._id = '{}_{}'.format(self.name, strftime("%Y-%m-%d_%H.%M.%S"))
self.group = group or ''
self.num_trials = num_trials
self.max_time = max_time
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
sys.path += list(x for x in [os.getcwd(), self.dir_path] if x not in sys.path)
if topology is None:
topology = serialization.load_network(network_params,
dir_path=self.dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
self.topology = nx.Graph(topology)
if kwargs:
cfg = {}
if config:
cfg.update(config.dict(include_defaults=False))
cfg.update(kwargs)
config = Config(**cfg)
if not config:
raise ValueError("You need to specify a simulation configuration")
self.config = config
self.environment_params = environment_params or {}
self.environment_class = serialization.deserialize(environment_class,
known_modules=['soil.environment', ]) or Environment
environment_agents = environment_agents or []
self.environment_agents = agents._convert_agent_types(environment_agents,
known_modules=[self.load_module])
distro = agents.calculate_distribution(network_agents,
agent_type)
self.network_agents = agents._convert_agent_types(distro,
known_modules=[self.load_module])
self.states = agents._validate_states(states,
self.topology)
self._history = History(name=self.name,
backup=False)
@property
def name(self) -> str:
return self.config.general.id
def run_simulation(self, *args, **kwargs):
return self.run(*args, **kwargs)
@@ -147,19 +59,19 @@ class Simulation:
if parallel and not os.environ.get('SENPY_DEBUG', None):
p = Pool()
func = partial(self.run_trial_exceptions, **kwargs)
for i in p.imap_unordered(func, range(self.num_trials)):
for i in p.imap_unordered(func, range(self.config.general.num_trials)):
if isinstance(i, Exception):
logger.error('Trial failed:\n\t%s', i.message)
continue
yield i
else:
for i in range(self.num_trials):
for i in range(self.config.general.num_trials):
yield self.run_trial(trial_id=i,
**kwargs)
def run_gen(self, parallel=False, dry_run=False,
exporters=[default, ], stats=[], outdir=None, exporter_params={},
stats_params={}, log_level=None,
exporters=[default, ], outdir=None, exporter_params={},
log_level=None,
**kwargs):
'''Run the simulation and yield the resulting environments.'''
if log_level:
@@ -172,85 +84,63 @@ class Simulation:
dry_run=dry_run,
outdir=outdir,
**exporter_params)
stats = serialization.deserialize_all(simulation=self,
names=stats,
known_modules=['soil.stats',],
**stats_params)
with utils.timer('simulation {}'.format(self.name)):
for stat in stats:
stat.start()
with utils.timer('simulation {}'.format(self.config.general.id)):
for exporter in exporters:
exporter.start()
exporter.sim_start()
for env in self._run_sync_or_async(parallel=parallel,
log_level=log_level,
**kwargs):
collected = list(stat.trial(env) for stat in stats)
saved = self.save_stats(collected, t_step=env.now, trial_id=env.name)
for exporter in exporters:
exporter.trial_start(env)
for exporter in exporters:
exporter.trial(env, saved)
exporter.trial_end(env)
yield env
collected = list(stat.end() for stat in stats)
saved = self.save_stats(collected)
for exporter in exporters:
exporter.end(saved)
def save_stats(self, collection, **kwargs):
stats = dict(kwargs)
for stat in collection:
stats.update(stat)
self._history.save_stats(utils.flatten_dict(stats))
return stats
def get_stats(self, **kwargs):
return self._history.get_stats(**kwargs)
def log_stats(self, stats):
logger.info('Stats: \n{}'.format(yaml.dump(stats, default_flow_style=False)))
exporter.sim_end()
def get_env(self, trial_id=0, **kwargs):
'''Create an environment for a trial of the simulation'''
opts = self.environment_params.copy()
opts.update({
'name': '{}_trial_{}'.format(self.name, trial_id),
'topology': self.topology.copy(),
'network_params': self.network_params,
'seed': '{}_trial_{}'.format(self.seed, trial_id),
'initial_time': 0,
'interval': self.interval,
'network_agents': self.network_agents,
'initial_time': 0,
'states': self.states,
'dir_path': self.dir_path,
'default_state': self.default_state,
'environment_agents': self.environment_agents,
})
opts.update(kwargs)
env = self.environment_class(**opts)
# opts = self.environment_params.copy()
# opts.update({
# 'name': '{}_trial_{}'.format(self.name, trial_id),
# 'topology': self.topology.copy(),
# 'network_params': self.network_params,
# 'seed': '{}_trial_{}'.format(self.seed, trial_id),
# 'initial_time': 0,
# 'interval': self.interval,
# 'network_agents': self.network_agents,
# 'initial_time': 0,
# 'states': self.states,
# 'dir_path': self.dir_path,
# 'default_state': self.default_state,
# 'history': bool(self._history),
# 'environment_agents': self.environment_agents,
# })
# opts.update(kwargs)
print(self.config)
env = Environment.from_config(self.config, trial_id=trial_id, **kwargs)
return env
def run_trial(self, trial_id=0, until=None, log_level=logging.INFO, **opts):
def run_trial(self, trial_id=None, until=None, log_level=logging.INFO, **opts):
"""
Run a single trial of the simulation
"""
trial_id = trial_id if trial_id is not None else current_time()
if log_level:
logger.setLevel(log_level)
# Set-up trial environment and graph
until = until or self.max_time
env = self.get_env(trial_id=trial_id, **opts)
until = until or self.config.general.max_time
env = self.get_env(trial_id, **opts)
# Set up agents on nodes
with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
with utils.timer('Simulation {} trial {}'.format(self.config.general.id, trial_id)):
env.run(until)
return env
@@ -268,88 +158,41 @@ class Simulation:
return ex
def to_dict(self):
return self.__getstate__()
return self.config.dict()
def to_yaml(self):
return yaml.dump(self.to_dict())
def dump_yaml(self, f=None, outdir=None):
if not f and not outdir:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir, '{}.dumped.yml'.format(self.name))
with utils.open_or_reuse(f, 'w') as f:
f.write(self.to_yaml())
def dump_pickle(self, f=None, outdir=None):
if not outdir and not f:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir,
'{}.simulation.pickle'.format(self.name))
with utils.open_or_reuse(f, 'wb') as f:
pickle.dump(self, f)
def dump_sqlite(self, f):
return self._history.dump(f)
def __getstate__(self):
state={}
for k, v in self.__dict__.items():
if k[0] != '_':
state[k] = v
state['topology'] = json_graph.node_link_data(self.topology)
state['network_agents'] = agents.serialize_definition(self.network_agents,
known_modules = [])
state['environment_agents'] = agents.serialize_definition(self.environment_agents,
known_modules = [])
state['environment_class'] = serialization.serialize(self.environment_class,
known_modules=['soil.environment'])[1] # func, name
if state['load_module'] is None:
del state['load_module']
return state
def __setstate__(self, state):
self.__dict__ = state
self.load_module = getattr(self, 'load_module', None)
if self.dir_path not in sys.path:
sys.path += [self.dir_path, os.getcwd()]
self.topology = json_graph.node_link_graph(state['topology'])
self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
self.environment_agents = agents._convert_agent_types(self.environment_agents,
known_modules=[self.load_module])
self.environment_class = serialization.deserialize(self.environment_class,
known_modules=[self.load_module, 'soil.environment', ]) # func, name
return yaml.dump(self.config.dict())
def all_from_config(config):
configs = list(serialization.load_config(config))
for config, _ in configs:
sim = Simulation(**config)
for config, path in configs:
if config.get('version', '1') == '1':
config = convert_old(config)
if not isinstance(config, Config):
config = Config(**config)
if not config.general.dir_path:
config.general.dir_path = os.path.dirname(path)
sim = Simulation(config=config)
yield sim
def from_config(conf_or_path):
lst = list(all_from_config(conf_or_path))
if len(lst) > 1:
raise AttributeError('Provide only one configuration')
return lst[0]
def from_old_config(conf_or_path):
config = list(serialization.load_config(conf_or_path))
if len(config) > 1:
raise AttributeError('Provide only one configuration')
config = config[0][0]
sim = Simulation(**config)
return sim
config = convert_old(config[0][0])
return Simulation(config)
def run_from_config(*configs, **kwargs):
for config_def in configs:
# logger.info("Found {} config(s)".format(len(ls)))
for config, path in serialization.load_config(config_def):
name = config.get('name', 'unnamed')
logger.info("Using config(s): {name}".format(name=name))
dir_path = config.pop('dir_path', os.path.dirname(path))
sim = Simulation(dir_path=dir_path,
**config)
sim.run_simulation(**kwargs)
for sim in all_from_config(configs):
name = sim.config.general.id
logger.info("Using config(s): {name}".format(name=name))
sim.run_simulation(**kwargs)
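Putting the pieces together, a `Simulation` is now built around a version-2 `Config` instead of loose keyword arguments. A hedged sketch of the new entry points; the exact schema is defined in `soil.config`, and the field names below follow the converted test configuration listed further down, so they may still change while this is work in progress:

```python
from soil import simulation

cfg = {
    'version': '2',
    'general': {'id': 'example', 'dir_path': '/tmp/', 'num_trials': 1, 'max_time': 10},
    'topologies': {'default': {'params': {'generator': 'complete_graph', 'n': 10}}},
    'agents': {'network': {'topology': 'default',
                           'distribution': [{'agent_class': 'CounterModel', 'weight': 1}]}},
}

s = simulation.from_config(cfg)
for env in s.run_simulation(dry_run=True):
    print(env.id, env.now)
```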


@@ -1,106 +0,0 @@
import pandas as pd
from collections import Counter
class Stats:
'''
Interface for all stats. Subclassing it is not strictly necessary, but it is useful
if you do not plan to implement all the methods.
'''
def __init__(self, simulation):
self.simulation = simulation
def start(self):
'''Method to call when the simulation starts'''
pass
def end(self):
'''Method to call when the simulation ends'''
return {}
def trial(self, env):
'''Method to call when a trial ends'''
return {}
class distribution(Stats):
'''
Calculate the distribution of agent states at the end of each trial,
the mean value, and its deviation.
'''
def start(self):
self.means = []
self.counts = []
def trial(self, env):
df = env[None, None, None].df()
df = df.drop('SEED', axis=1)
ix = df.index[-1]
attrs = df.columns.get_level_values(0)
vc = {}
stats = {
'mean': {},
'count': {},
}
for a in attrs:
t = df.loc[(ix, a)]
try:
stats['mean'][a] = t.mean()
self.means.append(('mean', a, t.mean()))
except TypeError:
pass
for name, count in t.value_counts().iteritems():
if a not in stats['count']:
stats['count'][a] = {}
stats['count'][a][name] = count
self.counts.append(('count', a, name, count))
return stats
def end(self):
dfm = pd.DataFrame(self.means, columns=['metric', 'key', 'value'])
dfc = pd.DataFrame(self.counts, columns=['metric', 'key', 'value', 'count'])
count = {}
mean = {}
if self.means:
res = dfm.groupby(by=['key']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
mean = res['value'].to_dict()
if self.counts:
res = dfc.groupby(by=['key', 'value']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
for k,v in res['count'].to_dict().items():
if k not in count:
count[k] = {}
for tup, times in v.items():
subkey, subcount = tup
if subkey not in count[k]:
count[k][subkey] = {}
count[k][subkey][subcount] = times
return {'count': count, 'mean': mean}
class defaultStats(Stats):
def trial(self, env):
c = Counter()
c.update(a.__class__.__name__ for a in env.network_agents)
c2 = Counter()
c2.update(a['id'] for a in env.network_agents)
return {
'network ': {
'n_nodes': env.G.number_of_nodes(),
'n_edges': env.G.number_of_edges(),
},
'agents': {
'model_count': dict(c),
'state_count': dict(c2),
}
}


@@ -3,13 +3,15 @@ from queue import Empty
from heapq import heappush, heappop
import math
from .utils import logger
from mesa import Agent
from mesa import Agent as MesaAgent
INFINITY = float('inf')
class When:
def __init__(self, time):
if isinstance(time, When):
return time
self._time = time
def abs(self, time):
@@ -18,7 +20,7 @@ class When:
NEVER = When(INFINITY)
class Delta:
class Delta(When):
def __init__(self, delta):
self._delta = delta
@@ -39,7 +41,7 @@ class TimedActivation(BaseScheduler):
self._queue = []
self.next_time = 0
def add(self, agent: Agent):
def add(self, agent: MesaAgent):
if agent.unique_id not in self._agents:
heappush(self._queue, (self.time, agent.unique_id))
super().add(agent)
@@ -60,7 +62,8 @@ class TimedActivation(BaseScheduler):
(when, agent_id) = heappop(self._queue)
logger.debug(f'Stepping agent {agent_id}')
when = (self._agents[agent_id].step() or Delta(1)).abs(self.time)
returned = self._agents[agent_id].step()
when = (returned or Delta(1)).abs(self.time)
if when < self.time:
raise Exception("Cannot schedule an agent for a time in the past ({} < {})".format(when, self.time))
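As the scheduler shows, whatever an agent returns from `step()` is treated as its next activation time via `(returned or Delta(1)).abs(self.time)`. A small illustrative sketch of an agent that uses this to wake up only every five time units (the `Sleeper` class is hypothetical):

```python
from soil import agents
from soil.time import Delta


class Sleeper(agents.BaseAgent):
    '''Illustrative agent that controls its own scheduling.'''

    def step(self):
        # ... do some work here ...
        # Returning a Delta (or an absolute When) tells TimedActivation when to
        # activate this agent again; returning nothing defaults to Delta(1).
        return Delta(5)
```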


@@ -1,5 +1,5 @@
import logging
import time
from time import time as current_time, strftime, gmtime, localtime
import os
from shutil import copyfile
@@ -13,13 +13,13 @@ logger = logging.getLogger('soil')
@contextmanager
def timer(name='task', pre="", function=logger.info, to_object=None):
start = time.time()
start = current_time()
function('{}Starting {} at {}.'.format(pre, name,
time.strftime("%X", time.gmtime(start))))
strftime("%X", gmtime(start))))
yield start
end = time.time()
end = current_time()
function('{}Finished {} at {} in {} seconds'.format(pre, name,
time.strftime("%X", time.gmtime(end)),
strftime("%X", gmtime(end)),
str(end-start)))
if to_object:
to_object.start = start
@@ -34,7 +34,7 @@ def safe_open(path, mode='r', backup=True, **kwargs):
os.makedirs(outdir)
if backup and 'w' in mode and os.path.exists(path):
creation = os.path.getctime(path)
stamp = time.strftime('%Y-%m-%d_%H.%M.%S', time.localtime(creation))
stamp = strftime('%Y-%m-%d_%H.%M.%S', localtime(creation))
backup_dir = os.path.join(outdir, 'backup')
if not os.path.exists(backup_dir):
@@ -45,11 +45,13 @@ def safe_open(path, mode='r', backup=True, **kwargs):
return open(path, mode=mode, **kwargs)
@contextmanager
def open_or_reuse(f, *args, **kwargs):
try:
return safe_open(f, *args, **kwargs)
with safe_open(f, *args, **kwargs) as f:
yield f
except (AttributeError, TypeError):
return f
yield f
def flatten_dict(d):
if not isinstance(d, dict):
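Going back to the `open_or_reuse` change above: it is now a context manager, so the same call can wrap either a path or an already-open file-like object. A small usage sketch (the file name is illustrative):

```python
import io

from soil.utils import open_or_reuse

# With a path, a real file is opened (and backed up if it already exists)
with open_or_reuse('/tmp/results.csv', 'w') as f:
    f.write('trial,time\n')

# With a file-like object, the same object is reused as-is
buf = io.StringIO()
with open_or_reuse(buf) as f:
    f.write('trial,time\n')
```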


@@ -1,4 +1,4 @@
pytest
mesa>=0.8.9
pytest-profiling
scipy>=1.3
tornado


@@ -0,0 +1,49 @@
---
version: '2'
general:
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
topologies:
default:
params:
generator: complete_graph
n: 10
agents:
default:
agent_class: CounterModel
state:
times: 1
network:
topology: 'default'
distribution:
- agent_class: CounterModel
weight: 0.4
state:
state_id: 0
- agent_class: AggregatedCounter
weight: 0.6
override:
- filter:
node_id: 0
state:
name: 'The first node'
- filter:
node_id: 1
state:
name: 'The second node'
environment:
fixed:
- name: 'Environment Agent 1'
agent_class: CounterModel
state:
times: 10
environment:
environment_class: Environment
params:
am_i_complete: true

tests/old_complete.yml (new file, +32 lines)

@@ -0,0 +1,32 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 0.4
state:
state_id: 0
- agent_type: AggregatedCounter
weight: 0.6
environment_agents:
- agent_id: 'Environment Agent 1'
agent_type: CounterModel
state:
times: 10
environment_class: Environment
environment_params:
am_i_complete: true
agent_type: CounterModel
default_state:
times: 1
states:
- name: 'The first node'
- name: 'The second node'
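This is the same scenario in the old (version 1) format. The new `config.convert_old` helper translates it into the version-2 schema, which is exactly what `test_conversion` below checks; roughly:

```python
from soil import config, serialization

old = serialization.load_file('tests/old_complete.yml')[0]
new = config.convert_old(old, strict=False)

# `new` is a pydantic Config; compare it with tests/complete_converted.yml
print(new.dict(skip_defaults=True))
```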


@@ -16,9 +16,7 @@ class TestMain(TestCase):
d.step()
with pytest.raises(agents.DeadAgent):
d.step()
def test_die_returns_infinity(self):
d = Dead(unique_id=0, model=environment.Environment())
assert d.step().abs(0) == stime.INFINITY


@@ -50,6 +50,7 @@ class TestAnalysis(TestCase):
'states': [{'interval': 1}, {'interval': 2}],
'max_time': 30,
'num_trials': 1,
'history': True,
'environment_params': {
}
}

tests/test_config.py (new file, +119 lines)

@@ -0,0 +1,119 @@
from unittest import TestCase
import os
from os.path import join
from soil import simulation, serialization, config, network, agents
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
class TestConfig(TestCase):
def test_conversion(self):
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
converted_defaults = config.convert_old(old, strict=False)
converted = converted_defaults.dict(skip_defaults=True)
def isequal(a, b):
if isinstance(a, dict):
for (k, v) in a.items():
if v:
isequal(a[k], b[k])
else:
assert not b.get(k, None)
return
assert a == b
isequal(converted, expected)
def test_topology_config(self):
netconfig = config.NetConfig(**{
'path': join(ROOT, 'test.gexf')
})
net = network.from_config(netconfig, dir_path=ROOT)
assert len(net.nodes) == 2
assert len(net.edges) == 1
def test_env_from_config(self):
"""
Simple configuration that tests that the graph is loaded, and that
network agents are initialized properly.
"""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'agent_type': 'CounterModel',
# 'states': [{'times': 10}, {'times': 20}],
'max_time': 2,
'dry_run': True,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_old_config(config)
env = s.get_env()
assert len(env.topologies['default'].nodes) == 2
assert len(env.topologies['default'].edges) == 1
assert len(env.agents) == 2
assert env.agents[0].topology == env.topologies['default']
def test_agents_from_config(self):
'''We test that the known complete configuration produces
the right agents in the right groups'''
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
s = simulation.from_config(cfg)
env = s.get_env()
assert len(env.topologies['default'].nodes) == 10
assert len(env.agents(group='network')) == 10
assert len(env.agents(group='environment')) == 1
assert sum(1 for a in env.agents(group='network', agent_type=agents.CounterModel)) == 4
assert sum(1 for a in env.agents(group='network', agent_type=agents.AggregatedCounter)) == 6
def make_example_test(path, cfg):
def wrapped(self):
root = os.getcwd()
print(path)
s = simulation.from_config(cfg)
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
return wrapped
def add_example_tests():
for config, path in serialization.load_files(
join(EXAMPLES, '*', '*.yml'),
join(EXAMPLES, '*.yml'),
):
p = make_example_test(path=path, cfg=config)
fname = os.path.basename(path)
p.__name__ = 'test_example_file_%s' % fname
p.__doc__ = '%s should be a valid configuration' % fname
setattr(TestConfig, p.__name__, p)
del p
add_example_tests()


@@ -18,10 +18,10 @@ def make_example_test(path, config):
def wrapped(self):
root = os.getcwd()
for s in simulation.all_from_config(path):
iterations = s.max_time * s.num_trials
iterations = s.config.general.max_time * s.config.general.num_trials
if iterations > 1000:
s.max_time = 100
s.num_trials = 1
s.config.general.max_time = 100
s.config.general.num_trials = 1
if config.get('skip_test', False) and not FORCE_TESTS:
self.skipTest('Example ignored.')
envs = s.run_simulation(dry_run=True)


@@ -2,14 +2,11 @@ import os
import io
import tempfile
import shutil
from time import time
from unittest import TestCase
from soil import exporters
from soil import simulation
from soil.stats import distribution
class Dummy(exporters.Exporter):
started = False
trials = 0
@@ -19,17 +16,17 @@ class Dummy(exporters.Exporter):
called_trial = 0
called_end = 0
def start(self):
def sim_start(self):
self.__class__.called_start += 1
self.__class__.started = True
def trial(self, env, stats):
def trial_end(self, env):
assert env
self.__class__.trials += 1
self.__class__.total_time += env.now
self.__class__.called_trial += 1
def end(self, stats):
def sim_end(self):
self.__class__.ended = True
self.__class__.called_end += 1
@@ -68,6 +65,7 @@ class Exporters(TestCase):
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': n_trials,
'dry_run': False,
'environment_params': {}
}
output = io.StringIO()
@@ -78,7 +76,7 @@ class Exporters(TestCase):
exporters.csv,
exporters.gexf,
],
stats=[distribution,],
dry_run=False,
outdir=tmpdir,
exporter_params={'copy_to': output})
result = output.getvalue()

tests/test_history.py (new file, +128 lines)

@@ -0,0 +1,128 @@
from unittest import TestCase
import os
import io
import yaml
import copy
import pickle
import networkx as nx
from functools import partial
from os.path import join
from soil import (simulation, Environment, agents, serialization,
utils)
from soil.time import Delta
from tsih import NoHistory, History
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
class CustomAgent(agents.FSM):
@agents.default_state
@agents.state
def normal(self):
self.neighbors = self.count_agents(state_id='normal',
limit_neighbors=True)
@agents.state
def unreachable(self):
return
class TestHistory(TestCase):
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
"""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 11
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
def test_row_conversion(self):
env = Environment(history=True)
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
def test_nohistory(self):
'''
Make sure that no history(/sqlite) is used by default
'''
env = Environment(topology=nx.Graph(), network_agents=[])
assert isinstance(env._history, NoHistory)
def test_save_graph_history(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=True)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
def test_save_graph_nohistory(self):
'''
The history_to_graph method should return a valid networkx graph.
When NoHistory is used, only the last known value is known
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=False)
env.get_agent(0)['testvalue'] = 'start'
env.schedule.time = 10
env.get_agent(0)['testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, None) not in values
assert ('finish', 10, None) in values
def test_pickle_agent_environment(self):
env = Environment(name='Test', history=True)
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'


@@ -3,21 +3,21 @@ from unittest import TestCase
import os
import io
import yaml
import copy
import pickle
import networkx as nx
from functools import partial
from os.path import join
from soil import (simulation, Environment, agents, serialization,
utils)
from soil import (simulation, Environment, agents, network, serialization,
utils, config)
from soil.time import Delta
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
class CustomAgent(agents.FSM):
class CustomAgent(agents.FSM, agents.NetworkAgent):
@agents.default_state
@agents.state
def normal(self):
@@ -39,7 +39,7 @@ class TestMain(TestCase):
'path': join(ROOT, 'test.gexf')
}
}
G = serialization.load_network(config['network_params'])
G = network.from_config(config['network_params'])
assert G
assert len(G) == 2
with self.assertRaises(AttributeError):
@@ -48,7 +48,7 @@ class TestMain(TestCase):
'path': join(ROOT, 'unknown.extension')
}
}
G = serialization.load_network(config['network_params'])
G = network.from_config(config['network_params'])
print(G)
def test_generate_barabasi(self):
@@ -56,16 +56,16 @@ class TestMain(TestCase):
If no path is given, a generator and network parameters
should be used to generate a network
"""
config = {
'network_params': {
cfg = {
'params': {
'generator': 'barabasi_albert_graph'
}
}
with self.assertRaises(TypeError):
G = serialization.load_network(config['network_params'])
config['network_params']['n'] = 100
config['network_params']['m'] = 10
G = serialization.load_network(config['network_params'])
with self.assertRaises(Exception):
G = network.from_config(cfg)
cfg['params']['n'] = 100
cfg['params']['m'] = 10
G = network.from_config(cfg)
assert len(G) == 100
def test_empty_simulation(self):
@@ -78,59 +78,68 @@ class TestMain(TestCase):
'environment_params': {
}
}
s = simulation.from_config(config)
s = simulation.from_old_config(config)
s.run_simulation(dry_run=True)
def test_counter_agent(self):
def test_network_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
'generator': nx.complete_graph,
'n': 2,
},
'agent_type': 'CounterModel',
'states': [{'times': 10}, {'times': 20}],
'states': {
0: {'times': 10},
1: {'times': 20},
},
'max_time': 2,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(0)['times', 0] == 11
assert env.get_agent(0)['times', 1] == 12
assert env.get_agent(1)['times', 0] == 21
assert env.get_agent(1)['times', 1] == 22
s = simulation.from_old_config(config)
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
def test_counter_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
'version': '2',
'general': {
'name': 'CounterAgent',
'max_time': 2,
'dry_run': True,
'num_trials': 1,
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
'topologies': {
'default': {
'path': join(ROOT, 'test.gexf')
}
},
'agents': {
'default': {
'agent_class': 'CounterModel',
},
'counters': {
'topology': 'default',
'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
}
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 10
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
env = s.get_env()
assert isinstance(env.agents[0], agents.CounterModel)
assert env.agents[0].topology == env.topologies['default']
assert env.agents[0]['times'] == 10
assert env.agents[0]['times'] == 10
env.step()
assert env.agents[0]['times'] == 11
assert env.agents[1]['times'] == 21
def test_custom_agent(self):
"""Allow for search of neighbors with a certain state_id"""
@@ -147,18 +156,18 @@ class TestMain(TestCase):
'environment_params': {
}
}
s = simulation.from_config(config)
s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(1).count_agents(state_id='normal') == 2
assert env.get_agent(1).count_agents(state_id='normal', limit_neighbors=True) == 1
assert env.get_agent(0).neighbors == 1
assert env.agents[1].count_agents(state_id='normal') == 2
assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
assert env.agents[0].neighbors == 1
def test_torvalds_example(self):
"""A complete example from a documentation should work."""
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
config['network_params']['path'] = join(EXAMPLES,
config['network_params']['path'])
s = simulation.from_config(config)
s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
for a in env.network_agents:
skill_level = a.state['skill_level']
@@ -177,19 +186,20 @@ class TestMain(TestCase):
def test_yaml(self):
"""
The YAML version of a newly created simulation
should be equivalent to the configuration file used
The YAML version of a newly created configuration should be equivalent
to the configuration file used.
Values not present in the original config file should have reasonable
defaults.
"""
with utils.timer('loading'):
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
s = simulation.from_old_config(config)
with utils.timer('serializing'):
serial = s.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
with utils.timer('deleting'):
del recovered['topology']
assert config == recovered
for (k, v) in config.items():
assert recovered[k] == v
def test_configuration_changes(self):
"""
@@ -197,26 +207,13 @@ class TestMain(TestCase):
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
s = simulation.from_old_config(config)
init_config = copy.copy(s.config)
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
del nconfig['topology']
assert config == nconfig
def test_row_conversion(self):
env = Environment()
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
nconfig = s.config
# del nconfig['to
assert init_config == nconfig
def test_save_geometric(self):
"""
@@ -228,27 +225,15 @@ class TestMain(TestCase):
f = io.BytesIO()
env.dump_gexf(f)
def test_save_graph(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
def test_serialize_class(self):
ser, name = serialization.serialize(agents.BaseAgent)
ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
assert name == 'soil.agents.BaseAgent'
assert ser == agents.BaseAgent
ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
assert name == 'BaseAgent'
assert ser == agents.BaseAgent
ser, name = serialization.serialize(CustomAgent)
assert name == 'test_main.CustomAgent'
assert ser == CustomAgent
@@ -302,31 +287,19 @@ class TestMain(TestCase):
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_pickle_agent_environment(self):
env = Environment(name='Test')
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'
def test_subgraph(self):
'''An agent should be able to subgraph the global topology'''
G = nx.Graph()
G.add_node(3)
G.add_edge(1, 2)
distro = agents.calculate_distribution(agent_type=agents.NetworkAgent)
env = Environment(name='Test', topology=G, network_agents=distro)
distro[0]['topology'] = 'default'
aconfig = config.AgentConfig(distribution=distro, topology='default')
env = Environment(name='Test', topologies={'default': G}, agents={'network': aconfig})
lst = list(env.network_agents)
a2 = env.get_agent(2)
a3 = env.get_agent(3)
a2 = env.find_one(node_id=2)
a3 = env.find_one(node_id=3)
assert len(a2.subgraph(limit_neighbors=True)) == 2
assert len(a3.subgraph(limit_neighbors=True)) == 1
assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
@@ -346,7 +319,7 @@ class TestMain(TestCase):
'num_trials': 50,
'environment_params': {}
}
s = simulation.from_config(config)
s = simulation.from_old_config(config)
runs = list(s.run_simulation(dry_run=True))
over = list(x.now for x in runs if x.now>2)
assert len(runs) == config['num_trials']


@@ -1,34 +0,0 @@
from unittest import TestCase
from soil import simulation, stats
from soil.utils import unflatten_dict
class Stats(TestCase):
def test_distribution(self):
'''The distribution exporter should write the number of agents in each state'''
config = {
'name': 'exporter_sim',
'network_params': {
'generator': 'complete_graph',
'n': 4
},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': 5,
'environment_params': {}
}
s = simulation.from_config(config)
for env in s.run_simulation(stats=[stats.distribution]):
pass
# stats_res = unflatten_dict(dict(env._history['stats', -1, None]))
allstats = s.get_stats()
for stat in allstats:
assert 'count' in stat
assert 'mean' in stat
if 'trial_id' in stat:
assert stat['mean']['neighbors'] == 3
assert stat['count']['total']['4'] == 4
else:
assert stat['count']['count']['neighbors']['3'] == 20
assert stat['mean']['min']['neighbors'] == stat['mean']['max']['neighbors']