mirror of https://github.com/gsi-upm/soil synced 2024-11-22 03:02:28 +00:00
This commit is contained in:
J. Fernando Sánchez 2022-05-10 16:29:06 +02:00
parent 6f7481769e
commit bbaed636a8
18 changed files with 887 additions and 524 deletions

View File

@ -39,6 +39,7 @@ As of this writing,
This is a non-exhaustive list of tasks to achieve compatibility:
* Environments.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. Might rename environment_agents to other_agents or something like that
- [ ] Integrate `soil.Simulation` with mesa's runners:
  - [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner` (see the sketch below)
- [ ] Integrate `soil.Environment` with `mesa.Model`:
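
A very rough sketch of what the batch-runner item could look like, assuming `soil.Environment` (already a `mesa.Model` subclass) can be instantiated from keyword parameters; every parameter name below is illustrative, not part of this commit:

```python
from mesa.batchrunner import BatchRunner

from soil.environment import Environment

# Sweep one environment parameter over several values, mesa-style.
batch = BatchRunner(
    Environment,
    variable_parameters={'prob_neighbor_spread': [0.0, 0.5, 1.0]},
    fixed_parameters={'env_id': 'batch-demo'},
    iterations=3,
    max_steps=100,
)
batch.run_all()
```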

View File

@ -88,9 +88,18 @@ For example, the following configuration is equivalent to :code:`nx.complete_gra
Environment
============
The environment is the place where the shared state of the simulation is stored.
That means global parameters, such as the probability of a disease outbreak,
but also other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions to an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).

A configuration may specify the initial value of the environment parameters:

.. code:: yaml
@ -98,23 +107,33 @@ The configuration file may specify the initial value of the environment paramete
    daily_probability_of_earthquake: 0.001
    number_of_earthquakes: 0

All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update the environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
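
A minimal sketch of such a custom environment (the class, method, and parameter names below are illustrative, not part of soil):

.. code:: python

    import random

    from soil.environment import Environment


    class LotteryEnvironment(Environment):
        def agent_wins_lottery(self, agent):
            # Centralize the rule so every agent plays by the same odds;
            # 'lottery_win_probability' is a made-up environment parameter.
            return random.random() < self['lottery_win_probability']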

Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_type``) and state.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and which the code may use to control the agent's behavior.

All agents provide a ``step`` method, either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent behaves in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
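
For illustration, a bare-bones agent could look like this (a sketch; the ``times`` variable is made up, and ``defaults`` and the ``self.env`` attribute are assumed to behave as in soil's existing example agents):

.. code:: python

    from soil import agents


    class StepCounter(agents.BaseAgent):
        defaults = {'times': 0}

        def step(self):
            # Update this agent's own state on every activation.
            self['times'] += 1
            # self.env (the shared environment) gives access to global
            # parameters and, through them, to the other agents.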

There are two types of agents according to how they are added to the simulation: network agents and environment agents.

Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
@ -125,7 +144,9 @@ Hence, every node in the network will be associated to an agent of that type.
    agent_type: SISaModel

It is also possible to add more than one type of agent to the simulation.
The ratio of each type can be controlled using the ``weight`` property.
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.

.. code:: yaml

View File

@ -1,27 +1,38 @@
---
general:
  name: simple
  group: tests
  dir_path: "/tmp/"
  num_trials: 3
  max_time: 100
  interval: 1
  seed: "CompleteSeed!"
network:
  group: network
  params:
    generator: complete_graph
    n: 10
environment:
  environment_class: Environment
  params:
    am_i_complete: true
agents:
  default:
    agent_class: CounterModel
    state:
      times: 1
  environment:
    fixed:
      - agent_id: 'Environment Agent 1'
        agent_class: CounterModel
        state:
          times: 10
  network:
    distribution:
      - agent_class: CounterModel
        weight: 1
        state:
          state_id: 0
      - agent_class: AggregatedCounter
        weight: 0.2
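
A hedged sketch of loading this new-format file into the pydantic models introduced in this commit (assumes PyYAML and that the document maps cleanly onto `soil.config.Config`):

```python
import yaml

from soil import config

# Parse the YAML document and validate it against the new schema.
with open('examples/complete.yml') as f:
    raw = yaml.safe_load(f)

cfg = config.Config(**raw)
print(cfg.general.num_trials)  # -> 3
```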

View File

@ -1,6 +1,5 @@
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
  prob_neighbor_spread: 0.0
@ -9,11 +8,11 @@ interval: 1
max_time: 300
name: Sim_all_dumb
network_agents:
  - agent_type: newsspread.DumbViewer
    state:
      has_tv: false
    weight: 1
  - agent_type: newsspread.DumbViewer
    state:
      has_tv: true
    weight: 1
@ -24,7 +23,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
  prob_neighbor_spread: 0.0
@ -33,19 +31,19 @@ interval: 1
max_time: 300
name: Sim_half_herd
network_agents:
  - agent_type: newsspread.DumbViewer
    state:
      has_tv: false
    weight: 1
  - agent_type: newsspread.DumbViewer
    state:
      has_tv: true
    weight: 1
  - agent_type: newsspread.HerdViewer
    state:
      has_tv: false
    weight: 1
  - agent_type: newsspread.HerdViewer
    state:
      has_tv: true
    weight: 1
@ -56,7 +54,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
  prob_neighbor_spread: 0.0
@ -65,12 +62,12 @@ interval: 1
max_time: 300
name: Sim_all_herd
network_agents:
  - agent_type: newsspread.HerdViewer
    state:
      has_tv: true
      state_id: neutral
    weight: 1
  - agent_type: newsspread.HerdViewer
    state:
      has_tv: true
      state_id: neutral
@ -82,7 +79,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
  prob_neighbor_spread: 0.0
@ -92,12 +88,12 @@ interval: 1
max_time: 300
name: Sim_wise_herd
network_agents:
  - agent_type: newsspread.HerdViewer
    state:
      has_tv: true
      state_id: neutral
    weight: 1
  - agent_type: newsspread.WiseViewer
    state:
      has_tv: true
    weight: 1
@ -108,7 +104,6 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
  prob_neighbor_spread: 0.0
@ -118,12 +113,12 @@ interval: 1
max_time: 300
name: Sim_all_wise
network_agents:
  - agent_type: newsspread.WiseViewer
    state:
      has_tv: true
      state_id: neutral
    weight: 1
  - agent_type: newsspread.WiseViewer
    state:
      has_tv: true
    weight: 1

View File

@ -1,5 +1,4 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 100
interval: 1

View File

@ -1,5 +1,4 @@
name: TerroristNetworkModel_sim
load_module: TerroristNetworkModel
max_time: 150
num_trials: 1
network_params:
@ -9,19 +8,19 @@ network_params:
  # theta: 20
  n: 100
network_agents:
  - agent_type: TerroristNetworkModel.TerroristNetworkModel
    weight: 0.8
    state:
      id: civilian # Civilians
  - agent_type: TerroristNetworkModel.TerroristNetworkModel
    weight: 0.1
    state:
      id: leader # Leaders
  - agent_type: TerroristNetworkModel.TrainingAreaModel
    weight: 0.05
    state:
      id: terrorist # Terrorism
  - agent_type: TerroristNetworkModel.HavenModel
    weight: 0.05
    state:
      id: civilian # Civilian

View File

@ -7,3 +7,4 @@ SALib>=1.3
Jinja2
Mesa>=0.8.9
tsih>=0.1.6
pydantic>=1.9

View File

@ -1,251 +1,183 @@
from __future__ import annotations
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml import yaml
import os import os
import sys import sys
import networkx as nx
import collections.abc
from . import serialization, utils, basestring, agents from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
class Config(collections.abc.Mapping): class General(BaseModel):
""" id: str = 'Unnamed Simulation'
group: str = None
dir_path: str = None
num_trials: int = 1
max_time: float = 100
interval: float = 1
seed: str = ""
1) agent type can be specified by name or by class. @staticmethod
2) instead of just one type, a network agents distribution can be used. def default():
The distribution specifies the weight (or probability) of each return General()
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
Parameters
---------
name : str, optional
name of the Simulation
group : str, optional
a group name can be used to link simulations
topology (optional): networkx.Graph instance or Node-Link topology as a dict or string (will be loaded with `json_graph.node_link_graph(topology`).
network_params : dict
parameters used to create a topology with networkx, if no topology is given
network_agents : dict
definition of agents to populate the topology with
agent_type : NetworkAgent subclass, optional
Default type of NetworkAgent to use for nodes not specified in network_agents
states : list, optional
List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
whose value indicates the state
dir_path: str, optional
Directory path to load simulation assets (files, modules...)
seed : str, optional
Seed to use for the random generator
num_trials : int, optional
Number of independent simulation runs
max_time : int, optional
Maximum step/time for each simulation
environment_params : dict, optional
Dictionary of globally-shared environmental parameters
environment_agents: dict, optional
Similar to network_agents. Distribution of Agents that control the environment
environment_class: soil.environment.Environment subclass, optional
Class for the environment. It defaults to soil.environment.Environment
"""
__slots__ = 'name', 'agent_type', 'group', 'network_agents', 'environment_agents', 'states', 'default_state', 'interval', 'network_params', 'seed', 'num_trials', 'max_time', 'topology', 'schedule', 'initial_time', 'environment_params', 'environment_class', 'dir_path', '_added_to_path'
def __init__(self, name=None,
group=None,
agent_type='BaseAgent',
network_agents=None,
environment_agents=None,
states=None,
default_state=None,
interval=1,
network_params=None,
seed=None,
num_trials=1,
max_time=None,
topology=None,
schedule=None,
initial_time=0,
environment_params={},
environment_class='soil.Environment',
dir_path=None):
self.network_params = network_params
self.name = name or 'Unnamed'
self.seed = str(seed or name)
self.group = group or ''
self.num_trials = num_trials
self.max_time = max_time
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
self._added_to_path = list(x for x in [os.getcwd(), self.dir_path] if x not in sys.path)
sys.path += self._added_to_path
self.topology = topology
self.schedule = schedule
self.initial_time = initial_time
self.environment_class = environment_class # Could use TypeAlias in python >= 3.10
self.environment_params = dict(environment_params) nodeId = int
#TODO: Check agent distro vs fixed agents class Node(BaseModel):
self.environment_agents = environment_agents or [] id: nodeId
state: Dict[str, Any]
self.agent_type = agent_type
self.network_agents = network_agents or {}
self.states = states or {}
def validate(self): class Edge(BaseModel):
agents._validate_states(self.states, source: nodeId
self._topology) target: nodeId
value: float = 1
def restore_path(self):
for added in self._added_to_path:
sys.path.remove(added)
def to_yaml(self): class Topology(BaseModel):
return yaml.dump(self.to_dict()) nodes: List[Node]
directed: bool
links: List[Edge]
def dump_yaml(self, f=None, outdir=None):
if not f and not outdir:
raise ValueError('specify a file or an output directory')
if not f: class NetParams(BaseModel, extra=Extra.allow):
f = os.path.join(outdir, '{}.dumped.yml'.format(self.name)) generator: Union[Callable, str]
n: int
with utils.open_or_reuse(f, 'w') as f:
f.write(self.to_yaml())
def to_yaml(self): class NetConfig(BaseModel):
return yaml.dump(self.to_dict()) group: str = 'network'
params: Optional[NetParams]
topology: Optional[Topology]
path: Optional[str]
# TODO: See note on getstate @staticmethod
def to_dict(self): def default():
return self.__getstate__() return NetConfig(topology=None, params=None)
def dump_yaml(self, f=None, outdir=None):
if not f and not outdir:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir, '{}.dumped.yml'.format(self.name))
with utils.open_or_reuse(f, 'w') as f:
f.write(self.to_yaml())
def __getitem__(self, key):
return getattr(self, key)
def __iter__(self):
return (k for k in self.__slots__ if k[0] != '_')
def __len__(self):
return len(self.__slots__)
def dump_pickle(self, f=None, outdir=None):
if not outdir and not f:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir,
'{}.simulation.pickle'.format(self.name))
with utils.open_or_reuse(f, 'wb') as f:
pickle.dump(self, f)
# TODO: remove this. A config should be sendable regardless. Non-pickable objects could be computed via properties and the like
# def __getstate__(self):
# state={}
# for k, v in self.__dict__.items():
# if k[0] != '_':
# state[k] = v
# state['topology'] = json_graph.node_link_data(self.topology)
# state['network_agents'] = agents.serialize_definition(self.network_agents,
# known_modules = [])
# state['environment_agents'] = agents.serialize_definition(self.environment_agents,
# known_modules = [])
# state['environment_class'] = serialization.serialize(self.environment_class,
# known_modules=['soil.environment'])[1] # func, name
# if state['load_module'] is None:
# del state['load_module']
# return state
# # TODO: remove, same as __getstate__
# def __setstate__(self, state):
# self.__dict__ = state
# self.load_module = getattr(self, 'load_module', None)
# if self.dir_path not in sys.path:
# sys.path += [self.dir_path, os.getcwd()]
# self.topology = json_graph.node_link_graph(state['topology'])
# self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
# self.environment_agents = agents._convert_agent_types(self.environment_agents,
# known_modules=[self.load_module])
# self.environment_class = serialization.deserialize(self.environment_class,
# known_modules=[self.load_module,
# 'soil.environment', ]) # func, name
class CalculatedConfig(Config):
def __init__(self, config):
"""
Returns a configuration object that replaces some "plain" attributes (e.g., `environment_class` string) into
a Python object (`soil.environment.Environment` class).
"""
self._config = config
values = dict(config)
values['environment_class'] = self._environment_class()
values['environment_agents'] = self._environment_agents()
values['topology'] = self._topology()
values['network_agents'] = self._network_agents()
values['agent_type'] = serialization.deserialize(self.agent_type, known_modules=['soil.agents'])
@root_validator
def validate_all(cls, values):
if 'params' not in values and 'topology' not in values:
raise ValueError('You must specify either a topology or the parameters to generate a graph')
return values return values
def _topology(self):
topology = self._config.topology
if topology is None:
topology = serialization.load_network(self._config.network_params,
dir_path=self._config.dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict): class EnvConfig(BaseModel):
topology = json_graph.node_link_graph(topology) environment_class: Union[Type, str] = 'soil.Environment'
params: Dict[str, Any] = {}
schedule: Union[Type, str] = 'soil.time.TimedActivation'
return nx.Graph(topology) @staticmethod
def default():
return EnvConfig()
def _environment_class(self):
return serialization.deserialize(self._config.environment_class,
known_modules=['soil.environment', ]) or Environment
def _environment_agents(self): class SingleAgentConfig(BaseModel):
return agents._convert_agent_types(self._config.environment_agents) agent_class: Union[Type, str] = 'soil.Agent'
agent_id: Optional[Union[str, int]] = None
params: Dict[str, Any] = {}
state: Dict[str, Any] = {}
def _network_agents(self):
distro = agents.calculate_distribution(self._config.network_agents,
self._config.agent_type)
return agents._convert_agent_types(distro)
def _environment_class(self): class AgentDistro(SingleAgentConfig):
return serialization.deserialize(self._config.environment_class, weight: Optional[float] = None
known_modules=['soil.environment', ]) # func, name n: Optional[int] = None
@root_validator
def validate_all(cls, values):
if 'weight' in values and 'count' in values:
raise ValueError("You may either specify a weight in the distribution or an agent count")
return values
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[SingleAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
class Config(BaseModel, extra=Extra.forbid):
general: General = General.default()
network: Optional[NetConfig] = None
environment: EnvConfig = EnvConfig.default()
agents: Dict[str, AgentConfig] = {}
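
For illustration, a minimal sketch of building and validating a configuration directly from these models (all field values below are made up):

```python
from soil.config import Config

# pydantic validates the nested sections and fills in defaults.
cfg = Config(
    general={'id': 'demo', 'num_trials': 2, 'max_time': 50},
    network={'params': {'generator': 'complete_graph', 'n': 5}},
    agents={'default': {'agent_class': 'CounterModel'}},
)
print(cfg.general.num_trials)             # -> 2
print(cfg.environment.environment_class)  # -> 'soil.Environment' (default)
```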
def convert_old(old):
'''
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
'''
new = {}
general = {}
for k in ['id',
'group',
'dir_path',
'num_trials',
'max_time',
'interval',
'seed']:
if k in old:
general[k] = old[k]
network = {'group': 'network'}
if 'network_params' in old and old['network_params']:
for (k, v) in old['network_params'].items():
if k == 'path':
network['path'] = v
else:
network.setdefault('params', {})[k] = v
if 'topology' in old:
network['topology'] = old['topology']
agents = {
'environment': {
'fixed': []
},
'network': {},
'default': {},
}
if 'agent_type' in old:
agents['default']['agent_class'] = old['agent_type']
if 'default_state' in old:
agents['default']['state'] = old['default_state']
def updated_agent(agent):
newagent = dict(agent)
newagent['agent_class'] = newagent['agent_type']
del newagent['agent_type']
return newagent
for agent in old.get('environment_agents', []):
agents['environment']['fixed'].append(updated_agent(agent))
for agent in old.get('network_agents', []):
agents['network'].setdefault('distribution', []).append(updated_agent(agent))
environment = {'params': {}}
if 'environment_class' in old:
environment['environment_class'] = old['environment_class']
for (k, v) in old.get('environment_params', {}).items():
environment['params'][k] = v
return Config(general=general,
network=network,
environment=environment,
agents=agents)
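
A hedged sketch of how the converter might be used on an old-style document (the path is the fixture added by this commit; the printed value assumes the mapping shown above):

```python
import yaml

from soil.config import convert_old

with open('tests/old_complete.yml') as f:
    old = yaml.safe_load(f)

# Returns a validated Config instance in the new, nested format.
new_cfg = convert_old(old)
print(new_cfg.network.params.generator)  # -> 'complete_graph'
```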

264
soil/config_old.py Normal file
View File

@ -0,0 +1,264 @@
from pydantic import BaseModel, ValidationError, validator
import yaml
import os
import sys
import networkx as nx
import collections.abc
from . import serialization, utils, basestring, agents
class Config(collections.abc.Mapping):
"""
1) agent type can be specified by name or by class.
2) instead of just one type, a network agents distribution can be used.
The distribution specifies the weight (or probability) of each
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
Parameters
---------
name : str, optional
name of the Simulation
group : str, optional
a group name can be used to link simulations
topology (optional): networkx.Graph instance or Node-Link topology as a dict or string (will be loaded with `json_graph.node_link_graph(topology`).
network_params : dict
parameters used to create a topology with networkx, if no topology is given
network_agents : dict
definition of agents to populate the topology with
agent_type : NetworkAgent subclass, optional
Default type of NetworkAgent to use for nodes not specified in network_agents
states : list, optional
List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
whose value indicates the state
dir_path: str, optional
Directory path to load simulation assets (files, modules...)
seed : str, optional
Seed to use for the random generator
num_trials : int, optional
Number of independent simulation runs
max_time : int, optional
Maximum step/time for each simulation
environment_params : dict, optional
Dictionary of globally-shared environmental parameters
environment_agents: dict, optional
Similar to network_agents. Distribution of Agents that control the environment
environment_class: soil.environment.Environment subclass, optional
Class for the environment. It defaults to soil.environment.Environment
"""
__slots__ = 'name', 'agent_type', 'group', 'description', 'network_agents', 'environment_agents', 'states', 'default_state', 'interval', 'network_params', 'seed', 'num_trials', 'max_time', 'topology', 'schedule', 'initial_time', 'environment_params', 'environment_class', 'dir_path', '_added_to_path', 'visualization_params'
def __init__(self, name=None,
group=None,
agent_type='BaseAgent',
network_agents=None,
environment_agents=None,
states=None,
description=None,
default_state=None,
interval=1,
network_params=None,
seed=None,
num_trials=1,
max_time=None,
topology=None,
schedule=None,
initial_time=0,
environment_params={},
environment_class='soil.Environment',
dir_path=None,
visualization_params=None,
):
self.network_params = network_params
self.name = name or 'Unnamed'
self.description = description or 'No simulation description available'
self.seed = str(seed or name)
self.group = group or ''
self.num_trials = num_trials
self.max_time = max_time
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
self.visualization_params = visualization_params or {}
self._added_to_path = list(x for x in [os.getcwd(), self.dir_path] if x not in sys.path)
sys.path += self._added_to_path
self.topology = topology
self.schedule = schedule
self.initial_time = initial_time
self.environment_class = environment_class
self.environment_params = dict(environment_params)
#TODO: Check agent distro vs fixed agents
self.environment_agents = environment_agents or []
self.agent_type = agent_type
self.network_agents = network_agents or {}
self.states = states or {}
def validate(self):
agents._validate_states(self.states,
self._topology)
def calculate(self):
return CalculatedConfig(self)
def restore_path(self):
for added in self._added_to_path:
sys.path.remove(added)
def to_yaml(self):
return yaml.dump(self.to_dict())
def dump_yaml(self, f=None, outdir=None):
if not f and not outdir:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir, '{}.dumped.yml'.format(self.name))
with utils.open_or_reuse(f, 'w') as f:
f.write(self.to_yaml())
def to_yaml(self):
return yaml.dump(self.to_dict())
# TODO: See note on getstate
def to_dict(self):
return dict(self)
def __repr__(self):
return self.to_yaml()
def dump_yaml(self, f=None, outdir=None):
if not f and not outdir:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir, '{}.dumped.yml'.format(self.name))
with utils.open_or_reuse(f, 'w') as f:
f.write(self.to_yaml())
def __getitem__(self, key):
return getattr(self, key)
def __iter__(self):
return (k for k in self.__slots__ if k[0] != '_')
def __len__(self):
return len(self.__slots__)
def dump_pickle(self, f=None, outdir=None):
if not outdir and not f:
raise ValueError('specify a file or an output directory')
if not f:
f = os.path.join(outdir,
'{}.simulation.pickle'.format(self.name))
with utils.open_or_reuse(f, 'wb') as f:
pickle.dump(self, f)
# TODO: remove this. A config should be sendable regardless. Non-pickable objects could be computed via properties and the like
# def __getstate__(self):
# state={}
# for k, v in self.__dict__.items():
# if k[0] != '_':
# state[k] = v
# state['topology'] = json_graph.node_link_data(self.topology)
# state['network_agents'] = agents.serialize_definition(self.network_agents,
# known_modules = [])
# state['environment_agents'] = agents.serialize_definition(self.environment_agents,
# known_modules = [])
# state['environment_class'] = serialization.serialize(self.environment_class,
# known_modules=['soil.environment'])[1] # func, name
# if state['load_module'] is None:
# del state['load_module']
# return state
# # TODO: remove, same as __getstate__
# def __setstate__(self, state):
# self.__dict__ = state
# self.load_module = getattr(self, 'load_module', None)
# if self.dir_path not in sys.path:
# sys.path += [self.dir_path, os.getcwd()]
# self.topology = json_graph.node_link_graph(state['topology'])
# self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
# self.environment_agents = agents._convert_agent_types(self.environment_agents,
# known_modules=[self.load_module])
# self.environment_class = serialization.deserialize(self.environment_class,
# known_modules=[self.load_module,
# 'soil.environment', ]) # func, name
class CalculatedConfig(Config):
def __init__(self, config):
"""
Returns a configuration object that replaces some "plain" attributes (e.g., `environment_class` string) into
a Python object (`soil.environment.Environment` class).
"""
self._config = config
values = dict(config)
values['environment_class'] = self._environment_class()
values['environment_agents'] = self._environment_agents()
values['topology'] = self._topology()
values['network_agents'] = self._network_agents()
values['agent_type'] = serialization.deserialize(self.agent_type, known_modules=['soil.agents'])
return values
def _topology(self):
topology = self._config.topology
if topology is None:
topology = serialization.load_network(self._config.network_params,
dir_path=self._config.dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
return nx.Graph(topology)
def _environment_class(self):
return serialization.deserialize(self._config.environment_class,
known_modules=['soil.environment', ]) or Environment
def _environment_agents(self):
return agents._convert_agent_types(self._config.environment_agents)
def _network_agents(self):
distro = agents.calculate_distribution(self._config.network_agents,
self._config.agent_type)
return agents._convert_agent_types(distro)
def _environment_class(self):
return serialization.deserialize(self._config.environment_class,
known_modules=['soil.environment', ]) # func, name

View File

@ -16,13 +16,6 @@ from tsih import Record
from . import serialization, agents, analysis, utils, time, config from . import serialization, agents, analysis, utils, time, config
# These properties will be copied when pickling/unpickling the environment
_CONFIG_PROPS = [ 'name',
'states',
'default_state',
'interval',
]
class Environment(Model): class Environment(Model):
""" """
The environment is key in a simulation. It contains the network topology, The environment is key in a simulation. It contains the network topology,
@ -34,76 +27,62 @@ class Environment(Model):
:meth:`soil.environment.Environment.get` method. :meth:`soil.environment.Environment.get` method.
""" """
def __init__(self, name=None, def __init__(self,
network_agents=None, env_id,
environment_agents=None, seed='default',
states=None,
default_state=None,
interval=1,
network_params=None,
seed=None,
topology=None,
schedule=None, schedule=None,
initial_time=0, env_params=None,
environment_params=None,
dir_path=None, dir_path=None,
**kwargs): **kwargs):
super().__init__() super().__init__()
self.schedule = schedule
if schedule is None:
self.schedule = time.TimedActivation()
self.name = name or 'UnnamedEnvironment' self.seed = '{}_{}'.format(seed, env_id)
self.id = env_id
self.dir_path = dir_path or os.getcwd()
if schedule is None:
schedule = time.TimedActivation()
self.schedule = schedule
seed = seed or current_time() seed = seed or current_time()
random.seed(seed) random.seed(seed)
if isinstance(states, list): if isinstance(states, list):
states = dict(enumerate(states)) states = dict(enumerate(states))
self.states = deepcopy(states) if states else {} self.states = deepcopy(states) if states else {}
self.default_state = deepcopy(default_state) or {} self.default_state = deepcopy(default_state) or {}
if topology is None:
network_params = network_params or {}
topology = serialization.load_network(network_params,
dir_path=dir_path)
if not topology:
topology = nx.Graph()
self.G = nx.Graph(topology)
self.environment_params = environment_params or {} self.set_topology(topology=topology,
self.environment_params.update(kwargs) network_params=network_params)
self.agents = agents or {}
self.env_params = env_params or {}
self.env_params.update(kwargs)
self._env_agents = {}
self.interval = interval self.interval = interval
self['SEED'] = seed self['SEED'] = seed
if network_agents:
distro = agents.calculate_distribution(network_agents)
self.network_agents = agents._convert_agent_types(distro)
else:
self.network_agents = []
environment_agents = environment_agents or []
if environment_agents:
distro = agents.calculate_distribution(environment_agents)
environment_agents = agents._convert_agent_types(distro)
self.environment_agents = environment_agents
self.logger = utils.logger.getChild(self.name) self.logger = utils.logger.getChild(self.name)
@staticmethod @staticmethod
def from_config(conf: config.Config, trial_id, **kwargs) -> Environment: def from_config(conf: config.Config, trial_id, **kwargs) -> Environment:
'''Create an environment for a trial of the simulation''' '''Create an environment for a trial of the simulation'''
conf = conf
conf = config.Config(conf, **kwargs) if kwargs:
conf.seed = '{}_{}'.format(conf.seed, trial_id) conf = config.Config(**conf.dict(exclude_defaults=True), **kwargs)
conf.name = '{}_trial_{}'.format(conf.name, trial_id).replace('.', '-') seed = '{}_{}'.format(conf.general.seed, trial_id)
opts = conf.environment_params.copy() id = '{}_trial_{}'.format(conf.general.id, trial_id).replace('.', '-')
opts = conf.environment.params.copy()
opts.update(conf) opts.update(conf)
opts.update(kwargs) opts.update(kwargs)
env = serialization.deserialize(conf.environment_class)(**opts) env = serialization.deserialize(conf.environment.environment_class)(env_id=id, seed=seed, **opts)
return env return env
@property @property
@ -112,21 +91,30 @@ class Environment(Model):
return self.schedule.time return self.schedule.time
raise Exception('The environment has not been scheduled, so it has no sense of time') raise Exception('The environment has not been scheduled, so it has no sense of time')
def set_topology(self, topology, network_params=None, dir_path=None):
if topology is None:
network_params = network_params or {}
topology = serialization.load_network(network_params,
dir_path=dir_path or self.dir_path)
if not topology:
topology = nx.Graph()
self.G = nx.Graph(topology)
@property @property
def agents(self): def agents(self):
yield from self.environment_agents for agents in self.agents.values():
yield from self.network_agents yield from agents
@property @agents.setter
def environment_agents(self): def agents(self, agents):
for ref in self._env_agents.values(): self.agents = {}
yield ref
@environment_agents.setter for (k, v) in agents.items():
def environment_agents(self, environment_agents): self.agents[k] = agents.from_config(v)
self._environment_agents = environment_agents for agent in self.agents.get('network', []):
node = self.G.nodes[agent.unique_id]
self._env_agents = agents._definition_to_dict(definition=environment_agents) node['agent'] = agent
@property @property
def network_agents(self): def network_agents(self):
@ -135,12 +123,6 @@ class Environment(Model):
if 'agent' in node: if 'agent' in node:
yield node['agent'] yield node['agent']
@network_agents.setter
def network_agents(self, network_agents):
self._network_agents = network_agents
for ix in self.G.nodes():
self.init_agent(ix, agent_definitions=network_agents)
def init_agent(self, agent_id, agent_definitions): def init_agent(self, agent_id, agent_definitions):
node = self.G.nodes[agent_id] node = self.G.nodes[agent_id]
init = False init = False
@ -251,20 +233,20 @@ class Environment(Model):
value=value) value=value)
def __contains__(self, key): def __contains__(self, key):
return key in self.environment_params return key in self.env_params
def get(self, key, default=None): def get(self, key, default=None):
''' '''
Get the value of an environment attribute. Get the value of an environment attribute.
Return `default` if the value is not set. Return `default` if the value is not set.
''' '''
return self.environment_params.get(key, default) return self.env_params.get(key, default)
def __getitem__(self, key): def __getitem__(self, key):
return self.environment_params.get(key) return self.env_params.get(key)
def __setitem__(self, key, value): def __setitem__(self, key, value):
return self.environment_params.__setitem__(key, value) return self.env_params.__setitem__(key, value)
def get_agent(self, agent_id): def get_agent(self, agent_id):
return self.G.nodes[agent_id]['agent'] return self.G.nodes[agent_id]['agent']
@ -292,7 +274,7 @@ class Environment(Model):
yield from self._agent_to_tuples(agent, now) yield from self._agent_to_tuples(agent, now)
return return
for k, v in self.environment_params.items(): for k, v in self.env_params.items():
yield Record(dict_id='env', yield Record(dict_id='env',
t_step=now, t_step=now,
key=k, key=k,
@ -300,23 +282,5 @@ class Environment(Model):
for agent in self.agents: for agent in self.agents:
yield from self._agent_to_tuples(agent, now) yield from self._agent_to_tuples(agent, now)
def __getstate__(self):
state = {}
for prop in _CONFIG_PROPS:
state[prop] = self.__dict__[prop]
state['G'] = json_graph.node_link_data(self.G)
state['environment_agents'] = self._env_agents
state['schedule'] = self.schedule
return state
def __setstate__(self, state):
for prop in _CONFIG_PROPS:
self.__dict__[prop] = state[prop]
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
# self._env = None
self.schedule = state['schedule']
self._queue = []
SoilEnvironment = Environment SoilEnvironment = Environment
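
For reference, the new `set_topology` helper appears intended to be used like this (a sketch against a work-in-progress API; `env` stands for an already-built `Environment` instance):

```python
import networkx as nx

# Hand the environment a ready-made graph...
env.set_topology(topology=nx.complete_graph(10))

# ...or let soil build one from networkx-style parameters,
# exactly as a config's network params would.
env.set_topology(topology=None,
                 network_params={'generator': 'complete_graph', 'n': 10})
```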

View File

@ -2,6 +2,8 @@ import os
import csv as csvlib import csv as csvlib
from time import time as current_time from time import time as current_time
from io import BytesIO from io import BytesIO
from sqlalchemy import create_engine
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import networkx as nx import networkx as nx
@ -48,8 +50,8 @@ class Exporter:
self.simulation = simulation self.simulation = simulation
outdir = outdir or os.path.join(os.getcwd(), 'soil_output') outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
self.outdir = os.path.join(outdir, self.outdir = os.path.join(outdir,
simulation.config.group or '', simulation.config.general.group or '',
simulation.config.name) simulation.config.general.id)
self.dry_run = dry_run self.dry_run = dry_run
self.copy_to = copy_to self.copy_to = copy_to
@ -84,24 +86,33 @@ class Exporter:
class default(Exporter): class default(Exporter):
'''Default exporter. Writes sqlite results, as well as the simulation YAML''' '''Default exporter. Writes sqlite results, as well as the simulation YAML'''
def sim_start(self): # def sim_start(self):
if not self.dry_run: # if not self.dry_run:
logger.info('Dumping results to %s', self.outdir) # logger.info('Dumping results to %s', self.outdir)
self.simulation.dump_yaml(outdir=self.outdir) # self.simulation.dump_yaml(outdir=self.outdir)
else: # else:
logger.info('NOT dumping results') # logger.info('NOT dumping results')
def trial_start(self, env, stats): # def trial_start(self, env, stats):
if not self.dry_run: # if not self.dry_run:
with timer('Dumping simulation {} trial {}'.format(self.simulation.name, # with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
env.name)): # env.name)):
with self.output('{}.sqlite'.format(env.name), mode='wb') as f: # engine = create_engine('sqlite:///{}.sqlite'.format(env.name), echo=False)
env.dump_sqlite(f)
def sim_end(self, stats): # dc = env.datacollector
with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)): # tables = {'env': dc.get_model_vars_dataframe(),
with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f: # 'agents': dc.get_agent_vars_dataframe(),
self.simulation.dump_sqlite(f) # 'agents': dc.get_agent_vars_dataframe()}
# for table in dc.tables:
# tables[table] = dc.get_table_dataframe(table)
# for (t, df) in tables.items():
# df.to_sql(t, con=engine)
# def sim_end(self, stats):
# with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)):
# engine = create_engine('sqlite:///{}.sqlite'.format(self.simulation.name), echo=False)
# with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f:
# self.simulation.dump_sqlite(f)
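
The commented-out block above sketches the direction: dump mesa's `DataCollector` dataframes straight into SQLite through SQLAlchemy. Tidied up, the idea looks roughly like this (a sketch; it assumes the environment exposes a `datacollector`, which this commit does not wire up yet):

```python
from sqlalchemy import create_engine


def dump_datacollector_to_sqlite(env, path):
    """Write every DataCollector table of an environment to a SQLite file."""
    engine = create_engine('sqlite:///{}'.format(path), echo=False)
    dc = env.datacollector
    tables = {'env': dc.get_model_vars_dataframe(),
              'agents': dc.get_agent_vars_dataframe()}
    for table in dc.tables:
        tables[table] = dc.get_table_dataframe(table)
    for (name, df) in tables.items():
        df.to_sql(name, con=engine)
```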

View File

@ -51,8 +51,6 @@ def load_network(network_params, dir_path=None):
return G return G
def load_file(infile): def load_file(infile):
folder = os.path.dirname(infile) folder = os.path.dirname(infile)
if folder not in sys.path: if folder not in sys.path:
@ -138,7 +136,9 @@ def load_config(config):
builtins = importlib.import_module('builtins') builtins = importlib.import_module('builtins')
def name(value, known_modules=[]): KNOWN_MODULES = ['soil', ]
def name(value, known_modules=KNOWN_MODULES):
'''Return a name that can be imported, to serialize/deserialize an object''' '''Return a name that can be imported, to serialize/deserialize an object'''
if value is None: if value is None:
return 'None' return 'None'
@ -167,7 +167,7 @@ def serializer(type_):
return lambda x: x return lambda x: x
def serialize(v, known_modules=[]): def serialize(v, known_modules=KNOWN_MODULES):
'''Get a text representation of an object.''' '''Get a text representation of an object.'''
tname = name(v, known_modules=known_modules) tname = name(v, known_modules=known_modules)
func = serializer(tname) func = serializer(tname)
@ -176,7 +176,7 @@ def serialize(v, known_modules=[]):
IS_CLASS = re.compile(r"<class '(.*)'>") IS_CLASS = re.compile(r"<class '(.*)'>")
def deserializer(type_, known_modules=[]): def deserializer(type_, known_modules=KNOWN_MODULES):
if type(type_) != str: # Already deserialized if type(type_) != str: # Already deserialized
return type_ return type_
if type_ == 'str': if type_ == 'str':
@ -194,10 +194,9 @@ def deserializer(type_, known_modules=[]):
return getattr(cls, 'deserialize', cls) return getattr(cls, 'deserialize', cls)
# Otherwise, see if we can find the module and the class # Otherwise, see if we can find the module and the class
modules = known_modules or []
options = [] options = []
for mod in modules: for mod in known_modules:
if mod: if mod:
options.append((mod, type_)) options.append((mod, type_))
@ -226,7 +225,7 @@ def deserialize(type_, value=None, **kwargs):
return des(value) return des(value)
def deserialize_all(names, *args, known_modules=['soil'], **kwargs): def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
'''Return the list of deserialized objects''' '''Return the list of deserialized objects'''
objects = [] objects = []
for name in names: for name in names:
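
With `'soil'` now always among the known modules, names can be resolved without passing `known_modules` explicitly. A small sketch of the lookup (assuming the dotted-name handling works as in previous soil versions):

```python
from soil import serialization

# Fully-qualified names resolve by importing the module part and looking
# up the attribute on it.
Env = serialization.deserialize('soil.environment.Environment')

# Short names are searched in the known modules that are passed in
# (on top of the new 'soil' default).
CounterModel = serialization.deserialize('CounterModel',
                                         known_modules=['soil.agents'])
```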

View File

@ -18,7 +18,7 @@ from .utils import logger
from .exporters import default from .exporters import default
from .stats import defaultStats from .stats import defaultStats
from .config import Config from .config import Config, convert_old
#TODO: change documentation for simulation #TODO: change documentation for simulation
@ -34,18 +34,21 @@ class Simulation:
def __init__(self, config=None, def __init__(self, config=None,
**kwargs): **kwargs):
if bool(config) == bool(kwargs):
raise ValueError("Specify either a configuration or the parameters to initialize a configuration")
if kwargs: if kwargs:
config = Config(**kwargs) cfg = {}
if config:
cfg.update(config.dict(include_defaults=False))
cfg.update(kwargs)
config = Config(**cfg)
if not config:
raise ValueError("You need to specify a simulation configuration")
self.config = config self.config = config
@property @property
def name(self) -> str: def name(self) -> str:
return self.config.name return self.config.general.id
def run_simulation(self, *args, **kwargs): def run_simulation(self, *args, **kwargs):
return self.run(*args, **kwargs) return self.run(*args, **kwargs)
@ -58,13 +61,13 @@ class Simulation:
if parallel and not os.environ.get('SENPY_DEBUG', None): if parallel and not os.environ.get('SENPY_DEBUG', None):
p = Pool() p = Pool()
func = partial(self.run_trial_exceptions, **kwargs) func = partial(self.run_trial_exceptions, **kwargs)
for i in p.imap_unordered(func, range(self.config.num_trials)): for i in p.imap_unordered(func, range(self.config.general.num_trials)):
if isinstance(i, Exception): if isinstance(i, Exception):
logger.error('Trial failed:\n\t%s', i.message) logger.error('Trial failed:\n\t%s', i.message)
continue continue
yield i yield i
else: else:
for i in range(self.config.num_trials): for i in range(self.config.general.num_trials):
yield self.run_trial(trial_id=i, yield self.run_trial(trial_id=i,
**kwargs) **kwargs)
@ -88,7 +91,7 @@ class Simulation:
known_modules=['soil.stats',], known_modules=['soil.stats',],
**stats_params) **stats_params)
with utils.timer('simulation {}'.format(self.config.name)): with utils.timer('simulation {}'.format(self.config.general.id)):
for stat in stats: for stat in stats:
stat.sim_start() stat.sim_start()
@ -157,11 +160,11 @@ class Simulation:
if log_level: if log_level:
logger.setLevel(log_level) logger.setLevel(log_level)
# Set-up trial environment and graph # Set-up trial environment and graph
until = until or self.config.max_time until = until or self.config.general.max_time
env = Environment.from_config(self.config, trial_id=trial_id) env = Environment.from_config(self.config, trial_id=trial_id)
# Set up agents on nodes # Set up agents on nodes
with utils.timer('Simulation {} trial {}'.format(self.config.name, trial_id)): with utils.timer('Simulation {} trial {}'.format(self.config.general.id, trial_id)):
env.run(until) env.run(until)
return env return env
@ -194,15 +197,22 @@ def from_config(conf_or_path):
sim = Simulation(**config) sim = Simulation(**config)
return sim return sim
def from_old_config(conf_or_path):
config = list(serialization.load_config(conf_or_path))
if len(config) > 1:
raise AttributeError('Provide only one configuration')
config = convert_old(config[0][0])
return Simulation(config)
def run_from_config(*configs, **kwargs): def run_from_config(*configs, **kwargs):
for config_def in configs: for config_def in configs:
# logger.info("Found {} config(s)".format(len(ls))) # logger.info("Found {} config(s)".format(len(ls)))
for config, path in serialization.load_config(config_def): for config, path in serialization.load_config(config_def):
name = config.get('name', 'unnamed') name = config.general.id
logger.info("Using config(s): {name}".format(name=name)) logger.info("Using config(s): {name}".format(name=name))
dir_path = config.pop('dir_path', os.path.dirname(path)) dir_path = config.general.dir_path or os.path.dirname(path)
sim = Simulation(dir_path=dir_path, sim = Simulation(dir_path=dir_path,
**config) **config)
sim.run_simulation(**kwargs) sim.run_simulation(**kwargs)
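
Putting the pieces together, running an old-style configuration through the new pipeline mirrors what the updated tests do (a sketch; `dry_run` and the fixture path are illustrative):

```python
from soil import serialization, simulation

# Load an old-style document and let from_old_config() convert it.
cfg = serialization.load_file('tests/old_complete.yml')[0]
sim = simulation.from_old_config(cfg)

envs = sim.run_simulation(dry_run=True)
for env in envs:
    print(env.id, env.now)
```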

32
tests/old_complete.yml Normal file
View File

@ -0,0 +1,32 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
state:
state_id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents:
- agent_id: 'Environment Agent 1'
agent_type: CounterModel
state:
times: 10
environment_class: Environment
environment_params:
am_i_complete: true
agent_type: CounterModel
default_state:
times: 1
states:
- name: 'The first node'
- name: 'The second node'

62
tests/test_config.py Normal file
View File

@ -0,0 +1,62 @@
from unittest import TestCase
import os
from os.path import join
from soil import serialization, config
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
class TestConfig(TestCase):
def test_conversion(self):
new = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
converted = config.convert_old(old).dict(skip_defaults=True)
for (k, v) in new.items():
assert v == converted[k]
def make_example_test(path, cfg):
def wrapped(self):
root = os.getcwd()
s = config.Config(**cfg)
import pdb;pdb.set_trace()
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
return wrapped
def add_example_tests():
for config, path in serialization.load_files(
join(EXAMPLES, '*', '*.yml'),
join(EXAMPLES, '*.yml'),
):
p = make_example_test(path=path, cfg=config)
fname = os.path.basename(path)
p.__name__ = 'test_example_file_%s' % fname
p.__doc__ = '%s should be a valid configuration' % fname
setattr(TestConfig, p.__name__, p)
del p
add_example_tests()

View File

@ -18,10 +18,10 @@ def make_example_test(path, config):
def wrapped(self): def wrapped(self):
root = os.getcwd() root = os.getcwd()
for s in simulation.all_from_config(path): for s in simulation.all_from_config(path):
iterations = s.max_time * s.num_trials iterations = s.config.max_time * s.config.num_trials
if iterations > 1000: if iterations > 1000:
s.max_time = 100 s.config.max_time = 100
s.num_trials = 1 s.config.num_trials = 1
if config.get('skip_test', False) and not FORCE_TESTS: if config.get('skip_test', False) and not FORCE_TESTS:
self.skipTest('Example ignored.') self.skipTest('Example ignored.')
envs = s.run_simulation(dry_run=True) envs = s.run_simulation(dry_run=True)

128
tests/test_history.py Normal file
View File

@ -0,0 +1,128 @@
from unittest import TestCase
import os
import io
import yaml
import copy
import pickle
import networkx as nx
from functools import partial
from os.path import join
from soil import (simulation, Environment, agents, serialization,
utils)
from soil.time import Delta
from tsih import NoHistory, History
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
class CustomAgent(agents.FSM):
@agents.default_state
@agents.state
def normal(self):
self.neighbors = self.count_agents(state_id='normal',
limit_neighbors=True)
@agents.state
def unreachable(self):
return
class TestHistory(TestCase):
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
"""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 11
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
def test_row_conversion(self):
env = Environment(history=True)
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
def test_nohistory(self):
'''
Make sure that no history(/sqlite) is used by default
'''
env = Environment(topology=nx.Graph(), network_agents=[])
assert isinstance(env._history, NoHistory)
def test_save_graph_history(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=True)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
def test_save_graph_nohistory(self):
'''
The history_to_graph method should return a valid networkx graph.
When NoHistory is used, only the last known value is known
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=False)
env.get_agent(0)['testvalue'] = 'start'
env.schedule.time = 10
env.get_agent(0)['testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, None) not in values
assert ('finish', 10, None) in values
def test_pickle_agent_environment(self):
env = Environment(name='Test', history=True)
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'

View File

@ -3,6 +3,7 @@ from unittest import TestCase
import os import os
import io import io
import yaml import yaml
import copy
import pickle import pickle
import networkx as nx import networkx as nx
from functools import partial from functools import partial
@ -11,8 +12,6 @@ from os.path import join
from soil import (simulation, Environment, agents, serialization, from soil import (simulation, Environment, agents, serialization,
utils) utils)
from soil.time import Delta from soil.time import Delta
from tsih import NoHistory, History
ROOT = os.path.abspath(os.path.dirname(__file__)) ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples') EXAMPLES = join(ROOT, '..', 'examples')
@ -79,9 +78,31 @@ class TestMain(TestCase):
'environment_params': { 'environment_params': {
} }
} }
s = simulation.from_config(config) s = simulation.from_old_config(config)
s.run_simulation(dry_run=True) s.run_simulation(dry_run=True)
def test_network_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'CounterAgent',
'network_params': {
'generator': nx.complete_graph,
'n': 2,
},
'agent_type': 'CounterModel',
'states': {
0: {'times': 10},
1: {'times': 20},
},
'max_time': 2,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_old_config(config)
def test_counter_agent(self): def test_counter_agent(self):
""" """
The initial states should be applied to the agent and the The initial states should be applied to the agent and the
@ -98,41 +119,13 @@ class TestMain(TestCase):
'environment_params': { 'environment_params': {
} }
} }
s = simulation.from_config(config) s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0] env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(0)['times', 0] == 11 assert env.get_agent(0)['times', 0] == 11
assert env.get_agent(0)['times', 1] == 12 assert env.get_agent(0)['times', 1] == 12
assert env.get_agent(1)['times', 0] == 21 assert env.get_agent(1)['times', 0] == 21
assert env.get_agent(1)['times', 1] == 22 assert env.get_agent(1)['times', 1] == 22
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
"""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 10
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
def test_custom_agent(self): def test_custom_agent(self):
"""Allow for search of neighbors with a certain state_id"""
config = {
@ -148,7 +141,7 @@ class TestMain(TestCase):
'environment_params': {
}
}
-s = simulation.from_config(config)
+s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(1).count_agents(state_id='normal') == 2
assert env.get_agent(1).count_agents(state_id='normal', limit_neighbors=True) == 1
@ -159,7 +152,7 @@ class TestMain(TestCase):
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
config['network_params']['path'] = join(EXAMPLES,
config['network_params']['path'])
-s = simulation.from_config(config)
+s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
for a in env.network_agents:
skill_level = a.state['skill_level']
@ -178,19 +171,23 @@ class TestMain(TestCase):
def test_yaml(self):
"""
-The YAML version of a newly created simulation
-should be equivalent to the configuration file used
+The YAML version of a newly created configuration should be equivalent
+to the configuration file used.
+Values not present in the original config file should have reasonable
+defaults.
"""
with utils.timer('loading'):
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
-s = simulation.from_config(config)
+s = simulation.from_old_config(config)
with utils.timer('serializing'):
-serial = s.to_yaml()
+serial = s.config.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
with utils.timer('deleting'):
del recovered['topology']
-assert config == recovered
+for (k, v) in config.items():
+assert recovered[k] == v
+# assert config == recovered
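
The rewritten ``test_yaml`` no longer requires byte-for-byte equality: the exported YAML may gain keys with default values, so only the keys present in the original file are compared. A sketch of that round-trip, assuming ``serialization.load_file`` and ``Simulation.config.to_yaml()`` behave as used above (the file path is illustrative):

.. code:: python

  import yaml

  from soil import serialization, simulation

  # Round-trip: YAML file -> Simulation -> YAML -> dict
  config = serialization.load_file('examples/complete.yml')[0]
  s = simulation.from_old_config(config)
  recovered = yaml.load(s.config.to_yaml(), Loader=yaml.SafeLoader)

  # Every key from the original file must survive; keys added with default
  # values by the new configuration class are tolerated.
  for k, v in config.items():
      assert recovered[k] == v
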
def test_configuration_changes(self):
"""
@ -198,26 +195,13 @@ class TestMain(TestCase):
the simulation.
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
-s = simulation.from_config(config)
+s = simulation.from_old_config(config)
+init_config = copy.copy(s.config)
s.run_simulation(dry_run=True)
-nconfig = s.to_dict()
+nconfig = s.config
-del nconfig['topology']
+# del nconfig['topology']
-assert config == nconfig
+assert init_config == nconfig
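
``test_configuration_changes`` now snapshots the parsed configuration with ``copy.copy`` before running and compares the snapshot afterwards, instead of re-serializing the simulation. A sketch of that invariant, assuming ``Simulation.config`` supports equality comparison as the updated assertion implies:

.. code:: python

  import copy

  from soil import serialization, simulation

  config = serialization.load_file('examples/complete.yml')[0]
  s = simulation.from_old_config(config)

  before = copy.copy(s.config)    # snapshot taken before the run
  s.run_simulation(dry_run=True)
  assert before == s.config       # running must not mutate the configuration
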
def test_row_conversion(self):
env = Environment(history=True)
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
def test_save_geometric(self):
"""
@ -229,51 +213,15 @@ class TestMain(TestCase):
f = io.BytesIO()
env.dump_gexf(f)
def test_nohistory(self):
'''
Make sure that no history(/sqlite) is used by default
'''
env = Environment(topology=nx.Graph(), network_agents=[])
assert isinstance(env._history, NoHistory)
def test_save_graph_history(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=True)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
def test_save_graph_nohistory(self):
'''
The history_to_graph method should return a valid networkx graph.
When NoHistory is used, only the last known value is known
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=False)
env.get_agent(0)['testvalue'] = 'start'
env.schedule.time = 10
env.get_agent(0)['testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, None) not in values
assert ('finish', 10, None) in values
def test_serialize_class(self):
-ser, name = serialization.serialize(agents.BaseAgent)
+ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
assert name == 'soil.agents.BaseAgent'
assert ser == agents.BaseAgent
+ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
+assert name == 'BaseAgent'
+assert ser == agents.BaseAgent
ser, name = serialization.serialize(CustomAgent)
assert name == 'test_main.CustomAgent'
assert ser == CustomAgent
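
The updated ``test_serialize_class`` makes the ``known_modules`` argument explicit: with an empty list the serialized name is fully qualified, while listing ``'soil'`` shortens the names of classes defined in that package. A sketch of the behaviour asserted above (calls copied from the test, no other serialization options assumed):

.. code:: python

  from soil import agents, serialization

  # No known modules: the fully qualified name is returned
  ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
  assert name == 'soil.agents.BaseAgent'

  # 'soil' is known: classes from that package get their bare name
  ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil'])
  assert name == 'BaseAgent'
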
@ -327,20 +275,6 @@ class TestMain(TestCase):
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_pickle_agent_environment(self):
env = Environment(name='Test', history=True)
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'
def test_subgraph(self):
'''An agent should be able to subgraph the global topology'''
G = nx.Graph()
@ -371,7 +305,7 @@ class TestMain(TestCase):
'num_trials': 50,
'environment_params': {}
}
-s = simulation.from_config(config)
+s = simulation.from_old_config(config)
runs = list(s.run_simulation(dry_run=True))
over = list(x.now for x in runs if x.now>2)
assert len(runs) == config['num_trials']