mirror of https://github.com/gsi-upm/soil synced 2024-11-24 20:02:28 +00:00

WIP: all tests pass

Documentation needs some improvement

The API has been simplified to allow only ONE topology per
NetworkEnvironment.
This covers the main use case and simplifies the code.
J. Fernando Sánchez 2022-10-16 17:54:03 +02:00
parent cd62c23cb9
commit d9947c2c52
34 changed files with 693 additions and 736 deletions
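To illustrate the single-topology simplification described in the commit message, here is a minimal sketch of the new API (not part of the commit itself; the names Environment, populate_network and env.G appear in the diffs below, but the commit is marked WIP, so exact signatures may still change):

import networkx as nx
from soil import Environment, NetworkAgent

# One graph per environment: a single `topology` argument replaces the old
# `topologies` dictionary, and the graph is exposed directly as `env.G`.
env = Environment(topology=nx.complete_graph(10))

# Assign one agent to every free node of that single graph.
env.populate_network(agent_class=NetworkAgent)
assert env.G.number_of_nodes() == 10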

View File

@ -5,14 +5,20 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
## [0.3 UNRELEASED] ## [0.3 UNRELEASED]
### Added ### Added
* Simple debugging capabilities, with a custom `pdb.Pdb` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents) * Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Pdb` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similar to normal states, with one caveat. Only `time` values can be yielded, not a state. This is because the state will not change, it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
### Changed ### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization. * Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation * There may be more than one topology/network in the simulation
* Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to be assigned to. * Ability
### Removed ### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases. * Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
## [0.20.7] ## [0.20.7]
### Changed ### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument) * Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
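A hedged sketch of the generator-state feature listed under "Added" above (illustrative only: the Worker class is hypothetical, and the exact object to yield as a "time value" is assumed here to be what `self.env.timeout` returns):

from soil.agents import FSM, state, default_state

class Worker(FSM):
    @default_state
    @state
    def working(self):
        # A generator state: each yield hands back a time value; the agent is
        # resumed in this same state right after the yield, at that time.
        for _ in range(3):
            self.debug('still working')
            yield self.env.timeout(1)   # assumed form of a "time" value
        # The return value may be a state or a (state, time) tuple, as in
        # regular (non-generator) states.
        return self.done

    @state
    def done(self):
        return self.die()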

View File

@ -10,19 +10,14 @@ seed: "CompleteSeed!"
model_class: Environment model_class: Environment
model_params: model_params:
am_i_complete: true am_i_complete: true
topologies: topology:
default: params:
params: generator: complete_graph
generator: complete_graph n: 12
n: 10
another_graph:
params:
generator: complete_graph
n: 2
environment: environment:
agents: agents:
agent_class: CounterModel agent_class: CounterModel
topology: default topology: true
state: state:
times: 1 times: 1
# In this group we are not specifying any topology # In this group we are not specifying any topology
@ -30,25 +25,23 @@ model_params:
- name: 'Environment Agent 1' - name: 'Environment Agent 1'
agent_class: BaseAgent agent_class: BaseAgent
group: environment group: environment
topology: null topology: false
hidden: true hidden: true
state: state:
times: 10 times: 10
- agent_class: CounterModel - agent_class: CounterModel
id: 0 id: 0
group: other_counters group: fixed_counters
topology: another_graph
state: state:
times: 1 times: 1
total: 0 total: 0
- agent_class: CounterModel - agent_class: CounterModel
topology: another_graph group: fixed_counters
group: other_counters
id: 1 id: 1
distribution: distribution:
- agent_class: CounterModel - agent_class: CounterModel
weight: 1 weight: 1
group: general_counters group: distro_counters
state: state:
times: 3 times: 3
- agent_class: AggregatedCounter - agent_class: AggregatedCounter

View File

@ -1,63 +0,0 @@
---
version: '2'
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: "soil.Environment"
model_params:
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
agents:
# The values here will be used as default values for any agent
agent_class: CounterModel
topology: default
state:
times: 1
# This specifies a distribution of agents, each with a `weight` or an explicit number of agents
distribution:
- agent_class: CounterModel
weight: 1
# This is inherited from the default settings
#topology: default
state:
times: 3
- agent_class: AggregatedCounter
topology: default
weight: 0.2
fixed:
- name: 'Environment Agent 1'
# All the other agents will assigned to the 'default' group
group: environment
# Do not count this agent towards total limits
hidden: true
agent_class: soil.BaseAgent
topology: null
state:
times: 10
- agent_class: CounterModel
topology: another_graph
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
id: 1
override:
# 2 agents that match this filter will be updated to match the state {times: 5}
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5

View File

@ -15,6 +15,7 @@ class Fibonacci(FSM):
prev, self['prev'] = self['prev'], max([self.now, self['prev']]) prev, self['prev'] = self['prev'], max([self.now, self['prev']])
return None, self.env.timeout(prev) return None, self.env.timeout(prev)
class Odds(FSM): class Odds(FSM):
'''Agent that only executes in odd t_steps''' '''Agent that only executes in odd t_steps'''
@default_state @default_state
@ -23,9 +24,8 @@ class Odds(FSM):
self.log('Stopping at {}'.format(self.now)) self.log('Stopping at {}'.format(self.now))
return None, self.env.timeout(1+self.now%2) return None, self.env.timeout(1+self.now%2)
if __name__ == '__main__': if __name__ == '__main__':
import logging
logging.basicConfig(level=logging.INFO)
from soil import Simulation from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci}, s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci},
{'ids': [1], 'agent_class': Odds}], {'ids': [1], 'agent_class': Odds}],

View File

@ -8,17 +8,12 @@ interval: 1
seed: '1' seed: '1'
model_class: social_wealth.MoneyEnv model_class: social_wealth.MoneyEnv
model_params: model_params:
topologies: generator: social_wealth.graph_generator
default:
params:
generator: social_wealth.graph_generator
n: 5
agents: agents:
topology: true
distribution: distribution:
- agent_class: social_wealth.SocialMoneyAgent - agent_class: social_wealth.SocialMoneyAgent
topology: default
weight: 1 weight: 1
mesa_agent_class: social_wealth.MoneyAgent
N: 10 N: 10
width: 50 width: 50
height: 50 height: 50

View File

@ -2,6 +2,7 @@ from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
class MyNetwork(NetworkModule): class MyNetwork(NetworkModule):
@ -13,15 +14,16 @@ def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node # The model ensures there is 0 or 1 agent per node
portrayal = dict() portrayal = dict()
wealths = {node_id: data['agent'].wealth for (node_id, data) in env.G.nodes(data=True)}
portrayal["nodes"] = [ portrayal["nodes"] = [
{ {
"id": agent_id, "id": node_id,
"size": env.get_agent(agent_id).wealth, "size": 2*(wealth+1),
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959", "color": "#CC0000" if wealth == 0 else "#007959",
"color": "#CC0000", # "color": "#CC0000",
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}", "label": f"{node_id}: {wealth}",
} } for (node_id, wealth) in wealths.items()
for (agent_id) in env.G.nodes
] ]
portrayal["edges"] = [ portrayal["edges"] = [
@ -29,7 +31,6 @@ def network_portrayal(env):
for edge_id, (source, target) in enumerate(env.G.edges) for edge_id, (source, target) in enumerate(env.G.edges)
] ]
return portrayal return portrayal
@ -55,7 +56,7 @@ def gridPortrayal(agent):
} }
grid = MyNetwork(network_portrayal, 500, 500, library="sigma") grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule( chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector" [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
) )
@ -70,7 +71,6 @@ model_params = {
1, 1,
description="Choose how many agents to include in the model", description="Choose how many agents to include in the model",
), ),
"network_agents": [{"agent_class": SocialMoneyAgent}],
"height": UserSettableParameter( "height": UserSettableParameter(
"slider", "slider",
"height", "height",
@ -89,12 +89,15 @@ model_params = {
1, 1,
description="Grid width", description="Grid width",
), ),
"network_params": { "agent_class": UserSettableParameter('choice', 'Agent class', value='MoneyAgent',
'generator': graph_generator choices=['MoneyAgent', 'SocialMoneyAgent']),
}, "generator": graph_generator,
} }
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
canvas_element = CanvasGrid(gridPortrayal,
model_params["width"].value,
model_params["height"].value, 500, 500)
server = ModularServer( server = ModularServer(

View File

@ -10,7 +10,7 @@ from mesa.batchrunner import BatchRunner
import networkx as nx import networkx as nx
from soil import NetworkAgent, Environment from soil import NetworkAgent, Environment, serialization
def compute_gini(model): def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents] agent_wealths = [agent.wealth for agent in model.agents]
@ -19,15 +19,16 @@ def compute_gini(model):
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x)) B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B) return (1 + (1/N) - 2*B)
class MoneyAgent(MesaAgent): class MoneyAgent(MesaAgent):
""" """
A MESA agent with fixed initial wealth. A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity It will only share wealth with neighbors based on grid proximity
""" """
def __init__(self, unique_id, model): def __init__(self, unique_id, model, wealth=1):
super().__init__(unique_id=unique_id, model=model) super().__init__(unique_id=unique_id, model=model)
self.wealth = 1 self.wealth = wealth
def move(self): def move(self):
possible_steps = self.model.grid.get_neighborhood( possible_steps = self.model.grid.get_neighborhood(
@ -45,7 +46,7 @@ class MoneyAgent(MesaAgent):
self.wealth -= 1 self.wealth -= 1
def step(self): def step(self):
self.info("Crying wolf", self.pos) print("Crying wolf", self.pos)
self.move() self.move()
if self.wealth > 0: if self.wealth > 0:
self.give_money() self.give_money()
@ -58,8 +59,8 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos])) cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents()) friends = set(self.get_neighboring_agents())
self.info("Trying to give money") self.info("Trying to give money")
self.debug("Cellmates: ", cellmates) self.info("Cellmates: ", cellmates)
self.debug("Friends: ", friends) self.info("Friends: ", friends)
nearby_friends = list(cellmates & friends) nearby_friends = list(cellmates & friends)
@ -68,14 +69,29 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
other.wealth += 1 other.wealth += 1
self.wealth -= 1 self.wealth -= 1
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
class MoneyEnv(Environment): class MoneyEnv(Environment):
"""A model with some number of agents.""" """A model with some number of agents."""
def __init__(self, width, height, *args, topologies, **kwargs): def __init__(self, width, height, N, generator=graph_generator,
agent_class=SocialMoneyAgent,
topology=None, **kwargs):
super().__init__(*args, topologies=topologies, **kwargs) generator = serialization.deserialize(generator)
agent_class = serialization.deserialize(agent_class, globs=globals())
topology = generator(n=N)
super().__init__(topology=topology,
N=N,
**kwargs)
self.grid = MultiGrid(width, height, False) self.grid = MultiGrid(width, height, False)
self.populate_network(agent_class=agent_class)
# Create agents # Create agents
for agent in self.agents: for agent in self.agents:
x = self.random.randrange(self.grid.width) x = self.random.randrange(self.grid.width)
@ -87,17 +103,9 @@ class MoneyEnv(Environment):
agent_reporters={"Wealth": "wealth"}) agent_reporters={"Wealth": "wealth"})
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
if __name__ == '__main__': if __name__ == '__main__':
fixed_params = {"generator": nx.complete_graph,
G = graph_generator()
fixed_params = {"topology": G,
"width": 10, "width": 10,
"network_agents": [{"agent_class": SocialMoneyAgent, "network_agents": [{"agent_class": SocialMoneyAgent,
'weight': 1}], 'weight': 1}],
@ -116,4 +124,3 @@ if __name__ == '__main__':
run_data = batch_run.get_model_vars_dataframe() run_data = batch_run.get_model_vars_dataframe()
run_data.head() run_data.head()
print(run_data.Gini) print(run_data.Gini)

View File

@ -126,7 +126,7 @@ class Patron(FSM, NetworkAgent):
success depend on both agents' openness. success depend on both agents' openness.
''' '''
if force or self['openness'] > self.random.random(): if force or self['openness'] > self.random.random():
self.model.add_edge(self, other_agent) self.add_edge(self, other_agent)
self.info('Made some friend {}'.format(other_agent)) self.info('Made some friend {}'.format(other_agent))
return True return True
return False return False

View File

@ -57,7 +57,7 @@ class Male(RabbitModel):
class Female(RabbitModel): class Female(RabbitModel):
gestation = 100 gestation = 30
@state @state
def fertile(self): def fertile(self):
@ -72,10 +72,10 @@ class Female(RabbitModel):
self.pregnancy = -1 self.pregnancy = -1
self.set_state(self.pregnant, when=self.now) self.set_state(self.pregnant, when=self.now)
self.number_of_babies = int(8+4*self.random.random()) self.number_of_babies = int(8+4*self.random.random())
self.debug('I am pregnant')
@state @state
def pregnant(self): def pregnant(self):
self.debug('I am pregnant')
self.age += 1 self.age += 1
self.pregnancy += 1 self.pregnancy += 1
@ -88,7 +88,6 @@ class Female(RabbitModel):
state = {} state = {}
agent_class = self.random.choice([Male, Female]) agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, child = self.model.add_node(agent_class=agent_class,
topology=self.topology,
**state) **state)
child.add_edge(self) child.add_edge(self)
try: try:
@ -113,7 +112,7 @@ class RandomAccident(BaseAgent):
level = logging.INFO level = logging.INFO
def step(self): def step(self):
rabbits_alive = self.model.topology.number_of_nodes() rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive: if not rabbits_alive:
return self.die() return self.die()
@ -121,10 +120,15 @@ class RandomAccident(BaseAgent):
prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive))) prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
self.debug('Killing some rabbits with prob={}!'.format(prob_death)) self.debug('Killing some rabbits with prob={}!'.format(prob_death))
for i in self.iter_agents(agent_class=RabbitModel): for i in self.iter_agents(agent_class=RabbitModel):
if i.state.id == i.dead.id: if i.state_id == i.dead.id:
continue continue
if self.prob(prob_death): if self.prob(prob_death):
self.info('I killed a rabbit: {}'.format(i.id)) self.info('I killed a rabbit: {}'.format(i.id))
rabbits_alive -= 1 rabbits_alive -= 1
i.set_state(i.dead) i.set_state(i.dead)
self.debug('Rabbits alive: {}'.format(rabbits_alive)) self.debug('Rabbits alive: {}'.format(rabbits_alive))
if __name__ == '__main__':
from soil import easy
sim = easy('rabbits.yml')
sim.run()

View File

@ -10,18 +10,16 @@ max_time: 100
model_class: soil.environment.Environment model_class: soil.environment.Environment
model_params: model_params:
agents: agents:
topology: default topology: true
agent_class: rabbit_agents.RabbitModel agent_class: rabbit_agents.RabbitModel
distribution: distribution:
- agent_class: rabbit_agents.Male - agent_class: rabbit_agents.Male
topology: default
weight: 1 weight: 1
- agent_class: rabbit_agents.Female - agent_class: rabbit_agents.Female
topology: default
weight: 1 weight: 1
fixed: fixed:
- agent_class: rabbit_agents.RandomAccident - agent_class: rabbit_agents.RandomAccident
topology: null topology: false
hidden: true hidden: true
state: state:
group: environment group: environment
@ -29,13 +27,12 @@ model_params:
group: network group: network
mating_prob: 0.1 mating_prob: 0.1
prob_death: 0.001 prob_death: 0.001
topologies: topology:
default: fixed:
topology: directed: true
directed: true links: []
links: [] nodes:
nodes: - id: 1
- id: 1 - id: 0
- id: 0
extra: extra:
visualization_params: {} visualization_params: {}

View File

@ -10,18 +10,16 @@ max_time: 100
model_class: soil.environment.Environment model_class: soil.environment.Environment
model_params: model_params:
agents: agents:
topology: default topology: true
agent_class: rabbit_agents.RabbitModel agent_class: rabbit_agents.RabbitModel
distribution: distribution:
- agent_class: rabbit_agents.Male - agent_class: rabbit_agents.Male
topology: default
weight: 1 weight: 1
- agent_class: rabbit_agents.Female - agent_class: rabbit_agents.Female
topology: default
weight: 1 weight: 1
fixed: fixed:
- agent_class: rabbit_agents.RandomAccident - agent_class: rabbit_agents.RandomAccident
topology: null topology: false
hidden: true hidden: true
state: state:
group: environment group: environment
@ -29,13 +27,12 @@ model_params:
group: network group: network
mating_prob: 0.1 mating_prob: 0.1
prob_death: 0.001 prob_death: 0.001
topologies: topology:
default: fixed:
topology: directed: true
directed: true links: []
links: [] nodes:
nodes: - id: 1
- id: 1 - id: 0
- id: 0
extra: extra:
visualization_params: {} visualization_params: {}

View File

@ -4,7 +4,6 @@ Example of a fully programmatic simulation, without definition files.
''' '''
from soil import Simulation, agents from soil import Simulation, agents
from soil.time import Delta from soil.time import Delta
import logging
@ -40,5 +39,4 @@ s = Simulation(name='Programmatic',
dry_run=True) dry_run=True)
logging.basicConfig(level=logging.INFO)
envs = s.run() envs = s.run()

View File

@ -5,6 +5,6 @@ pyyaml>=5.1
pandas>=1 pandas>=1
SALib>=1.3 SALib>=1.3
Jinja2 Jinja2
Mesa>=1 Mesa>=1.1
pydantic>=1.9 pydantic>=1.9
sqlalchemy>=1.4 sqlalchemy>=1.4

View File

@ -53,6 +53,6 @@ setup(
include_package_data=True, include_package_data=True,
entry_points={ entry_points={
'console_scripts': 'console_scripts':
['soil = soil.__init__:main', ['soil = soil.__main__:main',
'soil-web = soil.web.__init__:main'] 'soil-web = soil.web.__init__:main']
}) })

View File

@ -21,7 +21,8 @@ from . import serialization
from .utils import logger from .utils import logger
from .time import * from .time import *
def main(cfg='simulation.yml', **kwargs):
def main(cfg='simulation.yml', exporters=None, parallel=None, output="soil_output", *, do_run=False, debug=False, **kwargs):
import argparse import argparse
from . import simulation from . import simulation
@ -48,16 +49,19 @@ def main(cfg='simulation.yml', **kwargs):
help='Dump all data collected in CSV format. Defaults to false.') help='Dump all data collected in CSV format. Defaults to false.')
parser.add_argument('--level', type=str, parser.add_argument('--level', type=str,
help='Logging level') help='Logging level')
parser.add_argument('--output', '-o', type=str, default="soil_output", parser.add_argument('--output', '-o', type=str, default=output or "soil_output",
help='folder to write results to. It defaults to the current directory.') help='folder to write results to. It defaults to the current directory.')
parser.add_argument('--synchronous', action='store_true', if parallel is None:
help='Run trials serially and synchronously instead of in parallel. Defaults to false.') parser.add_argument('--synchronous', action='store_true',
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
parser.add_argument('-e', '--exporter', action='append', parser.add_argument('-e', '--exporter', action='append',
default=[],
help='Export environment and/or simulations using this exporter') help='Export environment and/or simulations using this exporter')
parser.add_argument('--only-convert', '--convert', action='store_true', parser.add_argument('--only-convert', '--convert', action='store_true',
help='Do not run the simulation, only convert the configuration file(s) and output them.') help='Do not run the simulation, only convert the configuration file(s) and output them.')
parser.add_argument("--set", parser.add_argument("--set",
metavar="KEY=VALUE", metavar="KEY=VALUE",
action='append', action='append',
@ -74,32 +78,49 @@ def main(cfg='simulation.yml', **kwargs):
if args.version: if args.version:
return return
if parallel is None:
parallel = not args.synchronous
exporters = exporters or ['default', ]
for exp in args.exporter:
if exp not in exporters:
exporters.append(exp)
if args.csv:
exporters.append('csv')
if args.graph:
exporters.append('gexf')
if os.getcwd() not in sys.path: if os.getcwd() not in sys.path:
sys.path.append(os.getcwd()) sys.path.append(os.getcwd())
if args.module: if args.module:
importlib.import_module(args.module) importlib.import_module(args.module)
if output is None:
output = args.output
logger.info('Loading config file: {}'.format(args.file)) logger.info('Loading config file: {}'.format(args.file))
if args.pdb or args.debug: debug = debug or args.debug
args.synchronous = True
if args.debug:
os.environ['SOIL_DEBUG'] = 'true'
if args.pdb or debug:
args.synchronous = True
res = []
try: try:
exporters = list(args.exporter or ['default', ])
if args.csv:
exporters.append('csv')
if args.graph:
exporters.append('gexf')
exp_params = {} exp_params = {}
if args.dry_run:
exp_params['copy_to'] = sys.stdout
if not os.path.exists(args.file): if not os.path.exists(args.file):
logger.error('Please, input a valid file') logger.error('Please, input a valid file')
return return
for sim in simulation.iter_from_config(args.file):
for sim in simulation.iter_from_config(args.file,
dry_run=args.dry_run,
exporters=exporters,
parallel=parallel,
outdir=output,
exporter_params=exp_params,
**kwargs):
if args.set: if args.set:
for s in args.set: for s in args.set:
k, v = s.split('=', 1)[:2] k, v = s.split('=', 1)[:2]
@ -117,16 +138,14 @@ def main(cfg='simulation.yml', **kwargs):
except AttributeError: except AttributeError:
target[tail] = v target[tail] = v
if args.only_convert: if args.only_convert:
print(sim.to_yaml()) print(sim.to_yaml())
continue continue
if do_run:
sim.run_simulation(dry_run=args.dry_run, res.append(sim.run())
exporters=exporters, else:
parallel=(not args.synchronous), print('not running')
outdir=args.output, res.append(sim)
exporter_params=exp_params,
**kwargs)
except Exception as ex: except Exception as ex:
if args.pdb: if args.pdb:
@ -135,12 +154,16 @@ def main(cfg='simulation.yml', **kwargs):
post_mortem() post_mortem()
else: else:
raise raise
if debug:
from .debugging import set_trace
os.environ['SOIL_DEBUG'] = 'true'
set_trace()
return res
def easy(cfg, debug=False, **kwargs):
return main(cfg, **kwargs)[0]
def easy(cfg, debug=False):
sim = simulation.from_config(cfg)
if debug or os.environ.get('SOIL_DEBUG'):
from .debugging import setup
setup(sys._getframe().f_back)
return sim
if __name__ == '__main__': if __name__ == '__main__':
main() main(do_run=True)

View File

@ -1,4 +1,7 @@
from . import main from . import main as init_main
def main():
init_main(do_run=True)
if __name__ == '__main__': if __name__ == '__main__':
main() init_main(do_run=True)

View File

@ -47,7 +47,7 @@ class MetaAgent(ABCMeta):
} }
for attr, func in namespace.items(): for attr, func in namespace.items():
if isinstance(func, types.FunctionType) or isinstance(func, property) or attr[0] == '_': if isinstance(func, types.FunctionType) or isinstance(func, property) or isinstance(func, classmethod) or attr[0] == '_':
new_nmspc[attr] = func new_nmspc[attr] = func
elif attr == 'defaults': elif attr == 'defaults':
defaults.update(func) defaults.update(func)
@ -113,21 +113,18 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
def id(self): def id(self):
return self.unique_id return self.unique_id
@property @classmethod
def state(self): def from_dict(cls, model, attrs, warn_extra=True):
''' ignored = {}
Return the agent itself, which behaves as a dictionary. args = {}
for k, v in attrs.items():
This method shouldn't be used, but is kept here for backwards compatibility. if k in inspect.signature(cls).parameters:
''' args[k] = v
return self else:
ignored[k] = v
@state.setter if ignored and warn_extra:
def state(self, value): utils.logger.info(f'Ignoring the following arguments for agent class { cls.__name__ }: { ignored }')
if not value: return cls(model=model, **args)
return
for k, v in value.items():
self[k] = v
def __getitem__(self, key): def __getitem__(self, key):
try: try:
@ -232,18 +229,27 @@ class NetworkAgent(BaseAgent):
def __init__(self, *args, topology, node_id, **kwargs): def __init__(self, *args, topology, node_id, **kwargs):
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self.topology = topology assert topology is not None
self.node_id = node_id assert node_id is not None
self.G = self.model.topologies[topology] self.G = topology
assert self.G assert self.G
self.node_id = node_id
def count_neighboring_agents(self, state_id=None, **kwargs): def count_neighboring_agents(self, state_id=None, **kwargs):
return len(self.get_neighboring_agents(state_id=state_id, **kwargs)) return len(self.get_neighboring_agents(state_id=state_id, **kwargs))
def get_neighboring_agents(self, state_id=None, **kwargs): def get_neighboring_agents(self, **kwargs):
return self.get_agents(limit_neighbors=True, state_id=state_id, **kwargs) return list(self.iter_agents(limit_neighbors=True, **kwargs))
def iter_agents(self, unique_id=None, limit_neighbors=False, **kwargs): def add_edge(self, other):
self.topology.add_edge(self.node_id, other.node_id)
@property
def node(self):
return self.topology.nodes[self.node_id]
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None unique_ids = None
if isinstance(unique_id, list): if isinstance(unique_id, list):
unique_ids = set(unique_id) unique_ids = set(unique_id)
@ -253,7 +259,7 @@ class NetworkAgent(BaseAgent):
if limit_neighbors: if limit_neighbors:
neighbor_ids = set() neighbor_ids = set()
for node_id in self.G.neighbors(self.node_id): for node_id in self.G.neighbors(self.node_id):
if self.G.nodes[node_id].get('agent_id') is not None: if self.G.nodes[node_id].get('agent') is not None:
neighbor_ids.add(node_id) neighbor_ids.add(node_id)
if unique_ids: if unique_ids:
unique_ids = unique_ids & neighbor_ids unique_ids = unique_ids & neighbor_ids
@ -697,7 +703,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
state.update(kwargs) state.update(kwargs)
for k, v in state.items(): for k, v in state.items():
f = filter(lambda agent: agent.state.get(k, None) == v, f) f = filter(lambda agent: getattr(agent, k, None) == v, f)
if limit is not None: if limit is not None:
f = islice(f, limit) f = islice(f, limit)
@ -705,7 +711,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
yield from f yield from f
def from_config(cfg: config.AgentConfig, random, topologies: Dict[str, nx.Graph] = None) -> List[Dict[str, Any]]: def from_config(cfg: config.AgentConfig, random, topology: nx.Graph = None) -> List[Dict[str, Any]]:
''' '''
This function turns an agentconfig into a list of individual "agent specifications", which are just a dictionary This function turns an agentconfig into a list of individual "agent specifications", which are just a dictionary
with the parameters that the environment will use to construct each agent. with the parameters that the environment will use to construct each agent.
@ -716,40 +722,40 @@ def from_config(cfg: config.AgentConfig, random, topologies: Dict[str, nx.Graph]
default = cfg or config.AgentConfig() default = cfg or config.AgentConfig()
if not isinstance(cfg, config.AgentConfig): if not isinstance(cfg, config.AgentConfig):
cfg = config.AgentConfig(**cfg) cfg = config.AgentConfig(**cfg)
return _agents_from_config(cfg, topologies=topologies, random=random) return _agents_from_config(cfg, topology=topology, random=random)
def _agents_from_config(cfg: config.AgentConfig, def _agents_from_config(cfg: config.AgentConfig,
topologies: Dict[str, nx.Graph], topology: nx.Graph,
random) -> List[Dict[str, Any]]: random) -> List[Dict[str, Any]]:
if cfg and not isinstance(cfg, config.AgentConfig): if cfg and not isinstance(cfg, config.AgentConfig):
cfg = config.AgentConfig(**cfg) cfg = config.AgentConfig(**cfg)
agents = [] agents = []
assigned = defaultdict(int) assigned_total = 0
assigned_network = 0
if cfg.fixed is not None: if cfg.fixed is not None:
agents, counts = _from_fixed(cfg.fixed, topology=cfg.topology, default=cfg) agents, assigned_total, assigned_network = _from_fixed(cfg.fixed, topology=cfg.topology, default=cfg)
assigned.update(counts)
n = cfg.n n = cfg.n
if cfg.distribution: if cfg.distribution:
topo_size = {top: len(topologies[top]) for top in topologies} topo_size = len(topology) if topology else 0
grouped = defaultdict(list) networked = []
total = [] total = []
for d in cfg.distribution: for d in cfg.distribution:
if d.strategy == config.Strategy.topology: if d.strategy == config.Strategy.topology:
topology = d.topology if ('topology' in d.__fields_set__) else cfg.topology topo = d.topology if ('topology' in d.__fields_set__) else cfg.topology
if not topology: if not topo:
raise ValueError('The "topology" strategy only works if the topology parameter is specified') raise ValueError('The "topology" strategy only works if the topology parameter is set to True')
if topology not in topo_size: if not topo_size:
raise ValueError(f'Unknown topology selected: { topology }. Make sure the topology has been defined') raise ValueError(f'Topology does not have enough free nodes to assign one to the agent')
grouped[topology].append(d) networked.append(d)
if d.strategy == config.Strategy.total: if d.strategy == config.Strategy.total:
if not cfg.n: if not cfg.n:
@ -757,41 +763,36 @@ def _agents_from_config(cfg: config.AgentConfig,
total.append(d) total.append(d)
for (topo, distro) in grouped.items(): if networked:
if not topologies or topo not in topo_size: new_agents = _from_distro(networked,
raise ValueError( n= topo_size - assigned_network,
'You need to specify a target number of agents for the distribution \
or a configuration with a topology, along with a dictionary with \
all the available topologies')
n = len(topologies[topo])
target = topo_size[topo] - assigned[topo]
new_agents = _from_distro(cfg.distribution, target,
topology=topo, topology=topo,
default=cfg, default=cfg,
random=random) random=random)
assigned[topo] += len(new_agents) assigned_total += len(new_agents)
assigned_network += len(new_agents)
agents += new_agents agents += new_agents
if total: if total:
remaining = n - sum(assigned.values()) remaining = n - assigned_total
agents += _from_distro(total, remaining, agents += _from_distro(total, n=remaining,
topology='', # DO NOT assign to any topology default=cfg,
default=cfg, random=random)
random=random)
if sum(assigned.values()) != sum(topo_size.values()): if assigned_network < topo_size:
utils.logger.warn(f'The total number of agents does not match the total number of nodes in ' utils.logger.warn(f'The total number of agents does not match the total number of nodes in '
'every topology. This may be due to a definition error: assigned: ' 'every topology. This may be due to a definition error: assigned: '
f'{ assigned } total sizes: { topo_size }') f'{ assigned } total size: { topo_size }')
return agents return agents
def _from_fixed(lst: List[config.FixedAgentConfig], topology: str, default: config.SingleAgentConfig) -> List[Dict[str, Any]]: def _from_fixed(lst: List[config.FixedAgentConfig], topology: bool, default: config.SingleAgentConfig) -> List[Dict[str, Any]]:
agents = [] agents = []
counts = {} counts_total = 0
counts_network = 0
for fixed in lst: for fixed in lst:
agent = {} agent = {}
@ -803,12 +804,13 @@ def _from_fixed(lst: List[config.FixedAgentConfig], topology: str, default: conf
topo = fixed.topology if ('topology' in fixed.__fields_set__) else topology or default.topology topo = fixed.topology if ('topology' in fixed.__fields_set__) else topology or default.topology
if topo: if topo:
agent['topology'] = topo agent['topology'] = True
counts_network += 1
if not fixed.hidden: if not fixed.hidden:
counts[topo] = counts.get(topo, 0) + 1 counts_total += 1
agents.append(agent) agents.append(agent)
return agents, counts return agents, counts_total, counts_network
def _from_distro(distro: List[config.AgentDistro], def _from_distro(distro: List[config.AgentDistro],
@ -854,7 +856,6 @@ def _from_distro(distro: List[config.AgentDistro],
agent['agent_class'] = cls agent['agent_class'] = cls
if default: if default:
agent.update(default.state) agent.update(default.state)
# agent = cls(unique_id=agent_id, model=env, **state)
topology = d.topology if ('topology' in d.__fields_set__) else topology or default.topology topology = d.topology if ('topology' in d.__fields_set__) else topology or default.topology
if topology: if topology:
agent['topology'] = topology agent['topology'] = topology
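To make the from_config docstring above concrete, here is a hypothetical list of the "agent specifications" it produces (illustrative only, not actual repository output); each dict is later turned into an agent by the environment's _agent_from_dict:

from soil.agents import BaseAgent, CounterModel

agent_specs = [
    # A network agent: `topology` is now a plain boolean flag; the environment
    # picks a free node (or honours an explicit `node_id`) when instantiating.
    {'agent_class': CounterModel, 'topology': True, 'times': 1},
    # An environment-only agent that is never placed on the graph.
    {'agent_class': BaseAgent, 'topology': False, 'times': 10},
]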

View File

@ -43,7 +43,7 @@ class NetParams(BaseModel, extra=Extra.allow):
class NetConfig(BaseModel): class NetConfig(BaseModel):
params: Optional[NetParams] params: Optional[NetParams]
topology: Optional[Union[Topology, nx.Graph]] fixed: Optional[Union[Topology, nx.Graph]]
path: Optional[str] path: Optional[str]
class Config: class Config:
@ -70,7 +70,7 @@ class EnvConfig(BaseModel):
class SingleAgentConfig(BaseModel): class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None agent_class: Optional[Union[Type, str]] = None
unique_id: Optional[int] = None unique_id: Optional[int] = None
topology: Optional[str] = None topology: Optional[bool] = False
node_id: Optional[Union[int, str]] = None node_id: Optional[Union[int, str]] = None
state: Optional[Dict[str, Any]] = {} state: Optional[Dict[str, Any]] = {}
@ -81,8 +81,8 @@ class FixedAgentConfig(SingleAgentConfig):
@root_validator @root_validator
def validate_all(cls, values): def validate_all(cls, values):
if values.get('agent_id', None) is not None and values.get('n', 1) > 1: if values.get('unique_id', None) is not None and values.get('n', 1) > 1:
raise ValueError(f"An agent_id can only be provided when there is only one agent ({values.get('n')} given)") raise ValueError(f"An unique_id can only be provided when there is only one agent ({values.get('n')} given)")
return values return values
@ -102,7 +102,6 @@ class AgentDistro(SingleAgentConfig):
class AgentConfig(SingleAgentConfig): class AgentConfig(SingleAgentConfig):
n: Optional[int] = None n: Optional[int] = None
topology: Optional[str]
distribution: Optional[List[AgentDistro]] = None distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None override: Optional[List[OverrideAgentConfig]] = None
@ -171,9 +170,9 @@ def convert_old(old, strict=True):
else: else:
network.setdefault('params', {})[k] = v network.setdefault('params', {})[k] = v
topologies = {} topology = None
if network: if network:
topologies['default'] = network topology = network
agents = {'fixed': [], 'distribution': []} agents = {'fixed': [], 'distribution': []}
@ -195,7 +194,7 @@ def convert_old(old, strict=True):
agent['state']['name'] = agent['agent_id'] agent['state']['name'] = agent['agent_id']
del agent['agent_id'] del agent['agent_id']
agent['hidden'] = True agent['hidden'] = True
agent['topology'] = None agent['topology'] = False
fixed.append(updated_agent(agent)) fixed.append(updated_agent(agent))
del new['environment_agents'] del new['environment_agents']
@ -209,7 +208,7 @@ def convert_old(old, strict=True):
agents['state'] = old['default_state'] agents['state'] = old['default_state']
if 'network_agents' in old: if 'network_agents' in old:
agents['topology'] = 'default' agents['topology'] = True
agents.setdefault('state', {})['group'] = 'network' agents.setdefault('state', {})['group'] = 'network'
@ -224,7 +223,7 @@ def convert_old(old, strict=True):
del new['network_agents'] del new['network_agents']
if 'agent_class' in old and (not fixed and not by_weight): if 'agent_class' in old and (not fixed and not by_weight):
agents['topology'] = 'default' agents['topology'] = True
by_weight = [{'agent_class': old['agent_class'], 'weight': 1}] by_weight = [{'agent_class': old['agent_class'], 'weight': 1}]
@ -258,7 +257,7 @@ def convert_old(old, strict=True):
del new['dump'] del new['dump']
new['dry_run'] = not old['dump'] new['dry_run'] = not old['dump']
model_params['topologies'] = topologies model_params['topology'] = topology
model_params['agents'] = agents model_params['agents'] = agents
return Config(version='2', return Config(version='2',

View File

@ -30,7 +30,9 @@ def wrapcmd(func):
class Debug(pdb.Pdb): class Debug(pdb.Pdb):
def __init__(self, *args, skip_soil=False, **kwargs): def __init__(self, *args, skip_soil=False, **kwargs):
skip = kwargs.get('skip', []) skip = kwargs.get('skip', [])
skip.append('soil')
if skip_soil: if skip_soil:
skip.append('soil')
skip.append('soil.*') skip.append('soil.*')
skip.append('mesa.*') skip.append('mesa.*')
super(Debug, self).__init__(*args, skip=skip, **kwargs) super(Debug, self).__init__(*args, skip=skip, **kwargs)
@ -54,8 +56,14 @@ class Debug(pdb.Pdb):
do_sl = do_soil_list do_sl = do_soil_list
def do_continue_state(self, arg):
self.do_break_state(arg, temporary=True)
return self.do_continue('')
do_cs = do_continue_state
@wrapcmd @wrapcmd
def do_soil_self(): def do_soil_agent():
if not agent: if not agent:
print('No agent available') print('No agent available')
return return
@ -70,23 +78,31 @@ class Debug(pdb.Pdb):
print(agent.to_str(pretty=True, keys=keys)) print(agent.to_str(pretty=True, keys=keys))
do_ss = do_soil_self do_aa = do_soil_agent
def do_break_state(self, arg: str, temporary=False): def do_break_state(self, arg: str, instances=None, temporary=False):
''' '''
Break before a specified state is stepped into. Break before a specified state is stepped into.
''' '''
klass = None klass = None
state = arg.strip() state = arg
if not state: if not state:
self.error("Specify at least a state name") self.error("Specify at least a state name")
return return
comma = arg.find(':') state, *tokens = state.lstrip().split()
if comma > 0: if tokens:
state = arg[comma+1:].lstrip() instances = list(eval(token) for token in tokens)
klass = arg[:comma].rstrip()
colon = state.find(':')
if colon > 0:
klass = state[:colon].rstrip()
state = state[colon+1:].strip()
print(klass, state, tokens)
klass = eval(klass, klass = eval(klass,
self.curframe.f_globals, self.curframe.f_globals,
self.curframe_locals) self.curframe_locals)
@ -95,14 +111,16 @@ class Debug(pdb.Pdb):
klasses = [klass] klasses = [klass]
else: else:
klasses = [k for k in self.curframe.f_globals.values() if isinstance(k, type) and issubclass(k, FSM)] klasses = [k for k in self.curframe.f_globals.values() if isinstance(k, type) and issubclass(k, FSM)]
print(klasses)
if not klasses: if not klasses:
self.error('No agent classes found') self.error('No agent classes found')
for klass in klasses: for klass in klasses:
try: try:
func = getattr(klass, state) func = getattr(klass, state)
except AttributeError: except AttributeError:
self.error(f'State {state} not found in class {klass}')
continue continue
if hasattr(func, '__func__'): if hasattr(func, '__func__'):
func = func.__func__ func = func.__func__
@ -120,6 +138,9 @@ class Debug(pdb.Pdb):
raise ValueError('no line found') raise ValueError('no line found')
# now set the break point # now set the break point
cond = None cond = None
if instances:
cond = f'self.unique_id in { repr(instances) }'
existing = self.get_breaks(filename, line) existing = self.get_breaks(filename, line)
if existing: if existing:
self.message("Breakpoint already exists at %s:%d" % self.message("Breakpoint already exists at %s:%d" %
@ -132,20 +153,39 @@ class Debug(pdb.Pdb):
bp = self.get_breaks(filename, line)[-1] bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" % self.message("Breakpoint %d at %s:%d" %
(bp.number, bp.file, bp.line)) (bp.number, bp.file, bp.line))
do_bs = do_break_state do_bs = do_break_state
def do_break_state_self(self, arg: str, temporary=False):
'''
Break before a specified state is stepped into, for the current agent
'''
agent = self.curframe.f_locals.get('self')
if not agent:
self.error('No current agent.')
self.error('Try this again when the debugger is stopped inside an agent')
return
def setup(frame=None): arg = f'{agent.__class__.__name__}:{ arg } {agent.unique_id}'
debugger = Debug() return self.do_break_state(arg)
do_bss = do_break_state_self
debugger = None
def set_trace(frame=None, **kwargs):
global debugger
if debugger is None:
debugger = Debug(**kwargs)
frame = frame or sys._getframe().f_back frame = frame or sys._getframe().f_back
debugger.set_trace(frame) debugger.set_trace(frame)
def debug_env():
if os.environ.get('SOIL_DEBUG'):
return setup(frame=sys._getframe().f_back)
def post_mortem(traceback=None): def post_mortem(traceback=None):
p = Debug() global debugger
if debugger is None:
debugger = Debug()
t = sys.exc_info()[2] t = sys.exc_info()[2]
p.reset() debugger.reset()
p.interaction(None, t) debugger.interaction(None, t)

View File

@ -5,6 +5,7 @@ import sqlite3
import math import math
import random import random
import logging import logging
import inspect
from typing import Any, Dict, Optional, Union from typing import Any, Dict, Optional, Union
from collections import namedtuple from collections import namedtuple
@ -21,9 +22,6 @@ from mesa.datacollection import DataCollector
from . import agents as agentmod, config, serialization, utils, time, network from . import agents as agentmod, config, serialization, utils, time, network
Record = namedtuple('Record', 'dict_id t_step key value')
class BaseEnvironment(Model): class BaseEnvironment(Model):
""" """
The environment is key in a simulation. It controls how agents interact, The environment is key in a simulation. It controls how agents interact,
@ -51,6 +49,8 @@ class BaseEnvironment(Model):
**env_params): **env_params):
super().__init__(seed=seed) super().__init__(seed=seed)
self.env_params = env_params or {}
self.current_id = -1 self.current_id = -1
self.id = id self.id = id
@ -63,11 +63,8 @@ class BaseEnvironment(Model):
self.agent_class = agent_class or agentmod.BaseAgent self.agent_class = agent_class or agentmod.BaseAgent
self.init_agents(agents)
self.env_params = env_params or {}
self.interval = interval self.interval = interval
self.init_agents(agents)
self.logger = utils.logger.getChild(self.id) self.logger = utils.logger.getChild(self.id)
@ -77,7 +74,10 @@ class BaseEnvironment(Model):
tables=tables, tables=tables,
) )
def _read_single_agent(self, agent): def _agent_from_dict(self, agent):
'''
Translate an agent dictionary into an agent
'''
agent = dict(**agent) agent = dict(**agent)
cls = agent.pop('agent_class', None) or self.agent_class cls = agent.pop('agent_class', None) or self.agent_class
unique_id = agent.pop('unique_id', None) unique_id = agent.pop('unique_id', None)
@ -88,6 +88,14 @@ class BaseEnvironment(Model):
model=self, **agent) model=self, **agent)
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}): def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
'''
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
dictionaries that each describes an agent.
If given a list of dictionaries, an agent will be created for each dictionary. The agent
class can be specified through the `agent_class` key. The rest of the items will be used
as parameters to the agent.
'''
if not agents: if not agents:
return return
@ -98,13 +106,11 @@ class BaseEnvironment(Model):
lst = config.AgentConfig(**agents) lst = config.AgentConfig(**agents)
if lst.override: if lst.override:
override = lst.override override = lst.override
lst = agentmod.from_config(lst, lst = self._agent_dict_from_config(lst)
topologies=getattr(self, 'topologies', None),
random=self.random)
#TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore, #TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
# because it needs attribute such as unique_id, which are only present after init # because it needs attribute such as unique_id, which are only present after init
new_agents = [self._read_single_agent(agent) for agent in lst] new_agents = [self._agent_from_dict(agent) for agent in lst]
for a in new_agents: for a in new_agents:
@ -115,6 +121,9 @@ class BaseEnvironment(Model):
for attr, value in rule.state.items(): for attr, value in rule.state.items():
setattr(agent, attr, value) setattr(agent, attr, value)
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg,
random=self.random)
@property @property
def agents(self): def agents(self):
@ -133,12 +142,15 @@ class BaseEnvironment(Model):
raise Exception('The environment has not been scheduled, so it has no sense of time') raise Exception('The environment has not been scheduled, so it has no sense of time')
def add_agent(self, agent_id, agent_class, **kwargs): def add_agent(self, agent_class, unique_id=None, **kwargs):
a = None a = None
if agent_class: if unique_id is None:
a = agent_class(model=self, unique_id = self.next_id()
unique_id=agent_id,
**kwargs)
a = agent_class(model=self,
unique_id=unique_id,
**kwargs)
self.schedule.add(a) self.schedule.add(a)
return a return a
@ -180,123 +192,109 @@ class BaseEnvironment(Model):
def __setitem__(self, key, value): def __setitem__(self, key, value):
return self.env_params.__setitem__(key, value) return self.env_params.__setitem__(key, value)
def _agent_to_tuples(self, agent, now=None): def __str__(self):
if now is None: return str(self.env_params)
now = self.now
for k, v in agent.state.items():
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
def state_to_tuples(self, agent_id=None, now=None):
if now is None:
now = self.now
if agent_id:
agent = self.agents[agent_id]
yield from self._agent_to_tuples(agent, now)
return
for k, v in self.env_params.items():
yield Record(dict_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
yield from self._agent_to_tuples(agent, now)
class NetworkEnvironment(BaseEnvironment): class NetworkEnvironment(BaseEnvironment):
'''
The NetworkEnvironment is an environment that includes a networkx.Graph instance
and methods to associate agents to nodes and vice versa.
'''
def __init__(self, *args, topology: nx.Graph = None, topologies: Dict[str, config.NetConfig] = {}, **kwargs): def __init__(self, *args, topology: Union[config.NetConfig, nx.Graph] = None, **kwargs):
agents = kwargs.pop('agents', None) agents = kwargs.pop('agents', None)
super().__init__(*args, agents=None, **kwargs) super().__init__(*args, agents=None, **kwargs)
self._node_ids = {}
assert not hasattr(self, 'topologies')
if topology is not None:
if topologies:
raise ValueError('Please, provide either a single topology or a dictionary of them')
topologies = {'default': topology}
self.topologies = {} self._set_topology(topology)
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg, graph=name)
self.init_agents(agents) self.init_agents(agents)
def init_agents(self, *args, **kwargs):
'''Initialize the agents, then link each network agent to its node in the graph.'''
super().init_agents(*args, **kwargs)
for agent in self.schedule._agents.values():
if hasattr(agent, 'node_id'):
self._init_node(agent)
def _read_single_agent(self, agent, unique_id=None): def _init_node(self, agent):
'''
Make sure the node for a given agent has the proper attributes.
'''
self.G.nodes[agent.node_id]['agent'] = agent
def _agent_dict_from_config(self, cfg):
return agentmod.from_config(cfg,
topology=self.G,
random=self.random)
def _agent_from_dict(self, agent, unique_id=None):
agent = dict(agent) agent = dict(agent)
if agent.get('topology', None) is not None: if not agent.get('topology', False):
topology = agent.get('topology') return super()._agent_from_dict(agent)
if unique_id is None:
unique_id = self.next_id()
if topology:
node_id = self.agent_to_node(unique_id, graph_name=topology, node_id=agent.get('node_id'))
agent['node_id'] = node_id
agent['topology'] = topology
agent['unique_id'] = unique_id
return super()._read_single_agent(agent) if unique_id is None:
unique_id = self.next_id()
node_id = agent.get('node_id', None)
if node_id is None:
node_id = network.find_unassigned(self.G, random=self.random)
agent['node_id'] = node_id
agent['unique_id'] = unique_id
agent['topology'] = self.G
node_attrs = self.G.nodes[node_id]
node_attrs.update(agent)
agent = node_attrs
@property a = super()._agent_from_dict(agent)
def topology(self): self._init_node(a)
return self.topologies['default']
def set_topology(self, cfg=None, dir_path=None, graph='default'): return a
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology def _set_topology(self, cfg=None, dir_path=None):
if cfg is None:
cfg = nx.Graph()
elif not isinstance(cfg, nx.Graph):
cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)
def topology_for(self, unique_id): self.G = cfg
return self.topologies[self._node_ids[unique_id][0]]
@property @property
def network_agents(self): def network_agents(self):
yield from self.agents(agent_class=agentmod.NetworkAgent) for a in self.schedule._agents.values():
if isinstance(a, agentmod.NetworkAgent):
yield a
def agent_to_node(self, unique_id, graph_name='default', def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
node_id=None, shuffle=False): if unique_id is None:
node_id = network.agent_to_node(G=self.topologies[graph_name], unique_id = self.next_id()
agent_id=unique_id, if node_id is None:
node_id=node_id, node_id = network.find_unassigned(G=self.G,
shuffle=shuffle, shuffle=True,
random=self.random) random=self.random)
if node_id in self.G.nodes:
self.G.nodes[node_id]['agent'] = None # Reserve
else:
self.G.add_node(node_id)
self._node_ids[unique_id] = (graph_name, node_id) a = self.add_agent(unique_id=unique_id, agent_class=agent_class, node_id=node_id, **kwargs)
return node_id
def add_node(self, agent_class, topology, **kwargs):
unique_id = self.next_id()
self.topologies[topology].add_node(unique_id)
node_id = self.agent_to_node(unique_id=unique_id, node_id=unique_id, graph_name=topology)
a = self.add_agent(unique_id=unique_id, agent_class=agent_class, node_id=node_id, topology=topology, **kwargs)
a['visible'] = True a['visible'] = True
return a return a
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs): def agent_for_node_id(self, node_id):
agent1 = agent1.node_id return self.G.nodes[node_id].get('agent')
agent2 = agent2.node_id
return self.topologies[graph].add_edge(agent1, agent2, start=start)
def add_agent(self, unique_id, state=None, graph='default', **kwargs): def populate_network(self, agent_class, weights=None, **agent_params):
node = self.topologies[graph].nodes[unique_id] if not hasattr(agent_class, '__len__'):
node_state = node.get('state', {}) agent_class = [agent_class]
if node_state: weights = None
node_state.update(state or {}) for (node_id, node) in self.G.nodes(data=True):
state = node_state if 'agent' in node:
a = super().add_agent(unique_id, state=state, **kwargs) continue
node['agent'] = a a_class = self.random.choices(agent_class, weights)[0]
return a self.add_agent(node_id=node_id,
agent_class=a_class, **agent_params)
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
Environment = NetworkEnvironment Environment = NetworkEnvironment
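A short usage sketch of the node/agent bookkeeping shown above (hypothetical; since the commit is WIP, add_node, populate_network and agent_for_node_id are taken at face value from this diff):

import networkx as nx
from soil import Environment, NetworkAgent

env = Environment(topology=nx.path_graph(3))
env.populate_network(agent_class=NetworkAgent)

# Each node of env.G stores its agent under the 'agent' attribute, replacing
# the removed _node_ids bookkeeping.
agent = env.agent_for_node_id(0)
assert env.G.nodes[agent.node_id]['agent'] is agent

# New nodes and their agents are created in one call; the node is added to
# env.G if it does not exist yet.
newcomer = env.add_node(agent_class=NetworkAgent)
newcomer.add_edge(agent)   # links the two agents' nodes in env.G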

View File

@ -1,4 +1,5 @@
import os import os
import sys
from time import time as current_time from time import time as current_time
from io import BytesIO from io import BytesIO
from sqlalchemy import create_engine from sqlalchemy import create_engine
@ -52,6 +53,8 @@ class Exporter:
simulation.group or '', simulation.group or '',
simulation.name) simulation.name)
self.dry_run = dry_run self.dry_run = dry_run
if copy_to is None and dry_run:
copy_to = sys.stdout
self.copy_to = copy_to self.copy_to = copy_to
def sim_start(self): def sim_start(self):
@ -94,14 +97,19 @@ class default(Exporter):
logger.info('NOT dumping results') logger.info('NOT dumping results')
def trial_end(self, env): def trial_end(self, env):
if not self.dry_run: if self.dry_run:
with timer('Dumping simulation {} trial {}'.format(self.simulation.name, logger.info('Running in DRY_RUN mode, the database will NOT be created')
env.id)): return
engine = create_engine('sqlite:///{}.sqlite'.format(env.id), echo=False)
dc = env.datacollector with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
for (t, df) in get_dc_dfs(dc): env.id)):
df.to_sql(t, con=engine, if_exists='append')
fpath = os.path.join(self.outdir, f'{env.id}.sqlite')
engine = create_engine(f'sqlite:///{fpath}', echo=False)
dc = env.datacollector
for (t, df) in get_dc_dfs(dc):
df.to_sql(t, con=engine, if_exists='append')
def get_dc_dfs(dc): def get_dc_dfs(dc):

View File

@ -39,33 +39,30 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
known_modules=['networkx.generators',]) known_modules=['networkx.generators',])
return method(**net_args) return method(**net_args)
if isinstance(cfg.topology, config.Topology): if isinstance(cfg.fixed, config.Topology):
cfg = cfg.topology.dict() cfg = cfg.fixed.dict()
if isinstance(cfg, str) or isinstance(cfg, dict): if isinstance(cfg, str) or isinstance(cfg, dict):
return nx.json_graph.node_link_graph(cfg) return nx.json_graph.node_link_graph(cfg)
return nx.Graph() return nx.Graph()
def agent_to_node(G, agent_id, node_id=None, shuffle=False, random=random): def find_unassigned(G, shuffle=False, random=random):
''' '''
Link an agent to a node in a topology. Link an agent to a node in a topology.
If node_id is None, a node without an agent_id will be found. If node_id is None, a node without an agent_id will be found.
''' '''
#TODO: test #TODO: test
if node_id is None: candidates = list(G.nodes(data=True))
candidates = list(G.nodes(data=True)) if shuffle:
if shuffle: random.shuffle(candidates)
random.shuffle(candidates) for next_id, data in candidates:
for next_id, data in candidates: if 'agent' not in data:
if data.get('agent_id', None) is None: node_id = next_id
node_id = next_id break
break
if node_id is None:
raise ValueError(f"Not enough nodes in topology to assign one to agent {agent_id}")
G.nodes[node_id]['agent_id'] = agent_id
return node_id return node_id

View File

@@ -17,42 +17,6 @@ from jinja2 import Template

 logger = logging.getLogger('soil')

-# def load_network(network_params, dir_path=None):
-#     G = nx.Graph()
-#     if not network_params:
-#         return G
-#     if 'path' in network_params:
-#         path = network_params['path']
-#         if dir_path and not os.path.isabs(path):
-#             path = os.path.join(dir_path, path)
-#         extension = os.path.splitext(path)[1][1:]
-#         kwargs = {}
-#         if extension == 'gexf':
-#             kwargs['version'] = '1.2draft'
-#             kwargs['node_type'] = int
-#         try:
-#             method = getattr(nx.readwrite, 'read_' + extension)
-#         except AttributeError:
-#             raise AttributeError('Unknown format')
-#         G = method(path, **kwargs)
-#     elif 'generator' in network_params:
-#         net_args = network_params.copy()
-#         net_gen = net_args.pop('generator')
-#         if dir_path not in sys.path:
-#             sys.path.append(dir_path)
-#         method = deserializer(net_gen,
-#                               known_modules=['networkx.generators',])
-#         G = method(**net_args)
-#     return G

 def load_file(infile):
     folder = os.path.dirname(infile)
     if folder not in sys.path:
@@ -121,7 +85,7 @@ def params_for_template(config):
 def load_files(*patterns, **kwargs):
     for pattern in patterns:
-        for i in glob(pattern, **kwargs):
+        for i in glob(pattern, **kwargs, recursive=True):
             for cfg in load_file(i):
                 path = os.path.abspath(i)
                 yield Config.from_raw(cfg), path
@@ -229,14 +193,17 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
             return getattr(cls, 'deserialize', cls)
         except (ImportError, AttributeError) as ex:
             errors.append((modname, tname, ex))
-    raise Exception('Could not find type {}. Tried: {}'.format(type_, errors))
+    raise Exception('Could not find type "{}". Tried: {}'.format(type_, errors))


-def deserialize(type_, value=None, **kwargs):
+def deserialize(type_, value=None, globs=None, **kwargs):
     '''Get an object from a text representation'''
     if not isinstance(type_, str):
         return type_
-    des = deserializer(type_, **kwargs)
+    if globs and type_ in globs:
+        des = globs[type_]
+    else:
+        des = deserializer(type_, **kwargs)
     if value is None:
         return des
     return des(value)
@@ -244,6 +211,8 @@ def deserialize(type_, value=None, **kwargs):
 def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
     '''Return the list of deserialized objects'''
+    #TODO: remove
+    print('SERIALIZATION', kwargs)
     objects = []
     for name in names:
         mod = deserialize(name, known_modules=known_modules)
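A sketch of the new `globs` shortcut: names present in the mapping are returned directly, everything else still goes through the module lookup. The class name below is made up for the example:

from soil import serialization

class LocalThing:
    pass

cls = serialization.deserialize("LocalThing", globs={"LocalThing": LocalThing})
assert cls is LocalThing
# Without globs the name would be resolved against known modules instead, e.g.
# serialization.deserialize("CounterModel", known_modules=["soil.agents"]).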


@@ -11,17 +11,16 @@ import networkx as nx
 from textwrap import dedent
 from dataclasses import dataclass, field, asdict
-from typing import Any, Dict, Union, Optional
+from typing import Any, Dict, Union, Optional, List
 from networkx.readwrite import json_graph
 from functools import partial
 import pickle

-from . import serialization, utils, basestring, agents
+from . import serialization, exporters, utils, basestring, agents
 from .environment import Environment
 from .utils import logger, run_and_return_exceptions
-from .exporters import default
 from .time import INFINITY
 from .config import Config, convert_old
@@ -35,7 +34,7 @@ class Simulation:
         config (optional): :class:`config.Config`
             name of the Simulation
-        kwargs: parameters to use to initialize a new configuration, if one has not been provided.
+        kwargs: parameters to use to initialize a new configuration, if one not been provided.
     """
     version: str = '2'
     name: str = 'Unnamed simulation'
@@ -49,22 +48,27 @@ class Simulation:
     max_steps: int = -1
     interval: int = 1
     num_trials: int = 3
+    parallel: Optional[bool] = None
+    exporters: Optional[List[str]] = field(default_factory=list)
+    outdir: Optional[str] = None
+    exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
     dry_run: bool = False
     extra: Dict[str, Any] = field(default_factory=dict)

     @classmethod
-    def from_dict(cls, env):
+    def from_dict(cls, env, **kwargs):
         ignored = {k: v for k, v in env.items()
                    if k not in inspect.signature(cls).parameters}
-        kwargs = {k:v for k, v in env.items() if k not in ignored}
+        d = {k:v for k, v in env.items() if k not in ignored}
         if ignored:
-            kwargs.setdefault('extra', {}).update(ignored)
+            d.setdefault('extra', {}).update(ignored)
         if ignored:
             print(f'Warning: Ignoring these parameters (added to "extra"): { ignored }')
-        return cls(**kwargs)
+        d.update(kwargs)
+        return cls(**d)

     def run_simulation(self, *args, **kwargs):
         return self.run(*args, **kwargs)
@@ -78,15 +82,23 @@ class Simulation:
                     self.to_yaml())
         return list(self.run_gen(*args, **kwargs))

-    def run_gen(self, parallel=False, dry_run=False,
-                exporters=[default, ], outdir=None, exporter_params={},
+    def run_gen(self, parallel=False, dry_run=None,
+                exporters=None, outdir=None, exporter_params={},
                 log_level=None,
                 **kwargs):
         '''Run the simulation and yield the resulting environments.'''
         if log_level:
             logger.setLevel(log_level)
+        outdir = outdir or self.outdir
         logger.info('Using exporters: %s', exporters or [])
         logger.info('Output directory: %s', outdir)
+        if dry_run is None:
+            dry_run = self.dry_run
+        if exporters is None:
+            exporters = self.exporters
+        if not exporter_params:
+            exporter_params = self.exporter_params
         exporters = serialization.deserialize_all(exporters,
                                                   simulation=self,
                                                   known_modules=['soil.exporters', ],
@@ -115,18 +127,21 @@ class Simulation:
         for exporter in exporters:
             exporter.sim_end()

-    def get_env(self, trial_id=0, **kwargs):
+    def get_env(self, trial_id=0, model_params=None, **kwargs):
         '''Create an environment for a trial of the simulation'''
         def deserialize_reporters(reporters):
             for (k, v) in reporters.items():
                 if isinstance(v, str) and v.startswith('py:'):
                     reporters[k] = serialization.deserialize(value.lsplit(':', 1)[1])
+            return reporters

-        model_params = self.model_params.copy()
-        model_params.update(kwargs)
+        params = self.model_params.copy()
+        if model_params:
+            params.update(model_params)
+        params.update(kwargs)

-        agent_reporters = deserialize_reporters(model_params.pop('agent_reporters', {}))
-        model_reporters = deserialize_reporters(model_params.pop('model_reporters', {}))
+        agent_reporters = deserialize_reporters(params.pop('agent_reporters', {}))
+        model_reporters = deserialize_reporters(params.pop('model_reporters', {}))

         env = serialization.deserialize(self.model_class)
         return env(id=f'{self.name}_trial_{trial_id}',
@@ -134,7 +149,7 @@ class Simulation:
                    dir_path=self.dir_path,
                    agent_reporters=agent_reporters,
                    model_reporters=model_reporters,
-                   **model_params)
+                   **params)

     def run_trial(self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts):
         """
@@ -172,13 +187,10 @@ class Simulation:
         logger.info(dedent(f'''
             Model stats:
               Agents (total: { model.schedule.get_agent_count() }):
-                  - { (newline + ' - ').join(str(a) for a in model.schedule.agents) }'''
-f'''
-            Topologies (size):
-                - { dict( (k, len(v)) for (k, v) in model.topologies.items()) }
-''' if getattr(model, "topologies", None) else ''
-))
+                  - { (newline + ' - ').join(str(a) for a in model.schedule.agents) }

+            Topology size: { len(model.G) if hasattr(model, "G") else 0 }
+        '''))

         while not is_done():
             utils.logger.debug(f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}')
@@ -198,14 +210,14 @@
         return yaml.dump(self.to_dict())


-def iter_from_config(*cfgs):
+def iter_from_config(*cfgs, **kwargs):
     for config in cfgs:
         configs = list(serialization.load_config(config))
         for config, path in configs:
             d = dict(config)
             if 'dir_path' not in d:
                 d['dir_path'] = os.path.dirname(path)
-            yield Simulation.from_dict(d)
+            yield Simulation.from_dict(d, **kwargs)


 def from_config(conf_or_path):
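Since exporter settings are now plain `Simulation` fields, `run_gen()` can fall back to them when no explicit arguments are passed. A hedged sketch; the field values here are illustrative only and not taken from the diff:

from soil import simulation

sim = simulation.Simulation(
    name="exporter_defaults_example",
    model_params={"agents": [{"agent_class": "BaseAgent"}]},
    num_trials=1,
    max_steps=5,
    exporters=["default"],       # resolved against soil.exporters
    outdir="soil_output",
    dry_run=False,
)
for env in sim.run_gen():        # no exporter arguments needed here
    print(env.id, env.schedule.steps)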


@@ -76,7 +76,7 @@ class TimedActivation(BaseScheduler):
             agent = self._agents[agent_id]
             returned = agent.step()

-            if not agent.alive:
+            if not getattr(agent, 'alive', True):
                 self.remove(agent)
                 continue


@@ -20,10 +20,11 @@ else:
 logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"
 logFormatter = logging.Formatter(logformat, timeformat)

 consoleHandler = logging.StreamHandler()
 consoleHandler.setFormatter(logFormatter)
-logger.addHandler(consoleHandler)
+logging.basicConfig(level=logging.INFO,
+                    handlers=[consoleHandler,])


 @contextmanager


@@ -9,17 +9,16 @@ interval: 1
 seed: "CompleteSeed!"
 model_class: Environment
 model_params:
-  topologies:
-    default:
-      params:
-        generator: complete_graph
-        n: 4
+  topology:
+    params:
+      generator: complete_graph
+      n: 4
   agents:
     agent_class: CounterModel
     state:
       group: network
       times: 1
-    topology: 'default'
+    topology: true
     distribution:
     - agent_class: CounterModel
       weight: 0.25
@@ -42,7 +41,7 @@ model_params:
     fixed:
     - agent_class: BaseAgent
       hidden: true
-      topology: null
+      topology: false
       state:
         name: 'Environment Agent 1'
         times: 10
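The same single-topology layout can also be written as the dict form used by the updated tests below; a sketch, with illustrative names only:

from soil import simulation

config = {
    "name": "single_topology_example",
    "num_trials": 1,
    "max_time": 2,
    "model_params": {
        "topology": {"params": {"generator": "complete_graph", "n": 4}},
        "agents": {
            "agent_class": "CounterModel",
            "topology": True,     # place the agents on the network nodes
            "state": {"group": "network", "times": 1},
        },
    },
}

env = simulation.from_config(config).get_env()
assert len(env.G.nodes) == 4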


@@ -4,12 +4,14 @@ import pytest
from soil import agents, environment
from soil import time as stime


class Dead(agents.FSM):
    @agents.default_state
    @agents.state
    def only(self):
        return self.die()


class TestMain(TestCase):
    def test_die_raises_exception(self):
        d = Dead(unique_id=0, model=environment.Environment())
@@ -20,5 +22,5 @@ class TestMain(TestCase):
    def test_die_returns_infinity(self):
        d = Dead(unique_id=0, model=environment.Environment())
        ret = d.step().abs(0)
        print(ret, "next")
        assert ret == stime.INFINITY


@@ -7,9 +7,9 @@ from os.path import join
from soil import simulation, serialization, config, network, agents, utils

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")

FORCE_TESTS = os.environ.get("FORCE_TESTS", "")


def isequal(a, b):
@@ -24,7 +24,6 @@ def isequal(a, b):
class TestConfig(TestCase):
    def test_conversion(self):
        expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
        old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
@@ -38,7 +37,7 @@ class TestConfig(TestCase):
        The configuration should not change after running
        the simulation.
        """
        config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
        s = simulation.from_config(config)
        init_config = copy.copy(s.to_dict())
@@ -47,11 +46,8 @@ class TestConfig(TestCase):
        # del nconfig['to
        isequal(init_config, nconfig)

    def test_topology_config(self):
        netconfig = config.NetConfig(**{"path": join(ROOT, "test.gexf")})
        net = network.from_config(netconfig, dir_path=ROOT)
        assert len(net.nodes) == 2
        assert len(net.edges) == 1
@@ -62,36 +58,33 @@ class TestConfig(TestCase):
        network agents are initialized properly.
        """
        cfg = {
            "name": "CounterAgent",
            "network_params": {"path": join(ROOT, "test.gexf")},
            "agent_class": "CounterModel",
            # 'states': [{'times': 10}, {'times': 20}],
            "max_time": 2,
            "dry_run": True,
            "num_trials": 1,
            "environment_params": {},
        }
        conf = config.convert_old(cfg)
        s = simulation.from_config(conf)

        env = s.get_env()
        assert len(env.G.nodes) == 2
        assert len(env.G.edges) == 1
        assert len(env.agents) == 2
        assert env.agents[0].G == env.G

    def test_agents_from_config(self):
        """We test that the known complete configuration produces
        the right agents in the right groups"""
        cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
        s = simulation.from_config(cfg)
        env = s.get_env()
        assert len(env.G.nodes) == 4
        assert len(env.agents(group="network")) == 4
        assert len(env.agents(group="environment")) == 1

    def test_yaml(self):
        """
@@ -100,16 +93,17 @@ class TestConfig(TestCase):
        Values not present in the original config file should have reasonable
        defaults.
        """
        with utils.timer("loading"):
            config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
            s = simulation.from_config(config)
        with utils.timer("serializing"):
            serial = s.to_yaml()
        with utils.timer("recovering"):
            recovered = yaml.load(serial, Loader=yaml.SafeLoader)
        for (k, v) in config.items():
            assert recovered[k] == v


def make_example_test(path, cfg):
    def wrapped(self):
        root = os.getcwd()
@@ -133,18 +127,19 @@ def make_example_test(path, cfg):
        # assert env.now <= config['max_time']  # But not further than allowed
        # except KeyError:
        #     pass

    return wrapped


def add_example_tests():
    for config, path in serialization.load_files(
        join(EXAMPLES, "*", "*.yml"),
        join(EXAMPLES, "*.yml"),
    ):
        p = make_example_test(path=path, cfg=config)
        fname = os.path.basename(path)
        p.__name__ = "test_example_file_%s" % fname
        p.__doc__ = "%s should be a valid configuration" % fname
        setattr(TestConfig, p.__name__, p)
        del p


@@ -5,9 +5,9 @@ from os.path import join
from soil import serialization, simulation, config

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")

FORCE_TESTS = os.environ.get("FORCE_TESTS", "")


class TestExamples(TestCase):
@@ -23,31 +23,31 @@ def make_example_test(path, cfg):
        s.max_steps = 100
        s.num_trials = 1
        assert isinstance(cfg, config.Config)
        if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
            self.skipTest("Example ignored.")
        envs = s.run_simulation(dry_run=True)
        assert envs
        for env in envs:
            assert env
            try:
                n = cfg.model_params["network_params"]["n"]
                assert len(list(env.network_agents)) == n
            except KeyError:
                pass
            assert env.schedule.steps > 0  # It has run
            assert env.schedule.steps <= s.max_steps  # But not further than allowed

    return wrapped


def add_example_tests():
    for cfg, path in serialization.load_files(
        join(EXAMPLES, "**", "*.yml"),
    ):
        p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
        fname = os.path.basename(path)
        p.__name__ = "test_example_file_%s" % fname
        p.__doc__ = "%s should be a valid configuration" % fname
        setattr(TestExamples, p.__name__, p)
        del p


@@ -2,6 +2,7 @@ import os
import io
import tempfile
import shutil
import sqlite3

from unittest import TestCase

from soil import exporters
@@ -40,14 +41,10 @@ class Exporters(TestCase):
        num_trials = 5
        max_time = 2
        config = {
            "name": "exporter_sim",
            "model_params": {"agents": [{"agent_class": agents.BaseAgent}]},
            "max_time": max_time,
            "num_trials": num_trials,
        }
        s = simulation.from_config(config)
@@ -64,40 +61,52 @@ class Exporters(TestCase):
        assert Dummy.total_time == max_time * num_trials

    def test_writing(self):
        """Try to write CSV, sqlite and YAML (without dry_run)"""
        n_trials = 5
        config = {
            "name": "exporter_sim",
            "network_params": {"generator": "complete_graph", "n": 4},
            "agent_class": "CounterModel",
            "max_time": 2,
            "num_trials": n_trials,
            "dry_run": False,
            "environment_params": {},
        }
        output = io.StringIO()
        s = simulation.from_config(config)
        tmpdir = tempfile.mkdtemp()
        envs = s.run_simulation(
            exporters=[
                exporters.default,
                exporters.csv,
            ],
            model_params={
                "agent_reporters": {"times": "times"},
                "model_reporters": {
                    "constant": lambda x: 1,
                },
            },
            dry_run=False,
            outdir=tmpdir,
            exporter_params={"copy_to": output},
        )
        result = output.getvalue()

        simdir = os.path.join(tmpdir, s.group or "", s.name)
        with open(os.path.join(simdir, "{}.dumped.yml".format(s.name))) as f:
            result = f.read()
            assert result

        try:
            for e in envs:
                db = sqlite3.connect(os.path.join(simdir, f"{e.id}.sqlite"))
                cur = db.cursor()
                agent_entries = cur.execute("SELECT * from agents").fetchall()
                env_entries = cur.execute("SELECT * from env").fetchall()
                assert len(agent_entries) > 0
                assert len(env_entries) > 0

                with open(os.path.join(simdir, "{}.env.csv".format(e.id))) as f:
                    result = f.read()
                    assert result
        finally:


@@ -6,60 +6,55 @@ import networkx as nx
from functools import partial
from os.path import join

from soil import simulation, Environment, agents, network, serialization, utils, config
from soil.time import Delta

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")


class CustomAgent(agents.FSM, agents.NetworkAgent):
    @agents.default_state
    @agents.state
    def normal(self):
        self.neighbors = self.count_agents(state_id="normal", limit_neighbors=True)

    @agents.state
    def unreachable(self):
        return


class TestMain(TestCase):
    def test_empty_simulation(self):
        """A simulation with a base behaviour should do nothing"""
        config = {
            "model_params": {
                "network_params": {"path": join(ROOT, "test.gexf")},
                "agent_class": "BaseAgent",
            }
        }
        s = simulation.from_config(config)
        s.run_simulation(dry_run=True)

    def test_network_agent(self):
        """
        The initial states should be applied to the agent and the
        agent should be able to update its state."""
        config = {
            "name": "CounterAgent",
            "num_trials": 1,
            "max_time": 2,
            "model_params": {
                "network_params": {
                    "generator": nx.complete_graph,
                    "n": 2,
                },
                "agent_class": "CounterModel",
                "states": {
                    0: {"times": 10},
                    1: {"times": 20},
                },
            },
        }
        s = simulation.from_config(config)
@@ -68,48 +63,41 @@ class TestMain(TestCase):
        The initial states should be applied to the agent and the
        agent should be able to update its state."""
        config = {
            "version": "2",
            "name": "CounterAgent",
            "dry_run": True,
            "num_trials": 1,
            "max_time": 2,
            "model_params": {
                "topology": {"path": join(ROOT, "test.gexf")},
                "agents": {
                    "agent_class": "CounterModel",
                    "topology": True,
                    "fixed": [{"state": {"times": 10}}, {"state": {"times": 20}}],
                },
            },
        }
        s = simulation.from_config(config)
        env = s.get_env()
        assert isinstance(env.agents[0], agents.CounterModel)
        assert env.agents[0].G == env.G
        assert env.agents[0]["times"] == 10
        assert env.agents[0]["times"] == 10
        env.step()
        assert env.agents[0]["times"] == 11
        assert env.agents[1]["times"] == 21

    def test_init_and_count_agents(self):
        """Agents should be properly initialized and counting should filter them properly"""
        # TODO: separate this test into two or more test cases
        config = {
            "max_time": 10,
            "model_params": {
                "agents": [
                    {"agent_class": CustomAgent, "weight": 1, "topology": True},
                    {"agent_class": CustomAgent, "weight": 3, "topology": True},
                ],
                "topology": {"path": join(ROOT, "test.gexf")},
            },
        }
        s = simulation.from_config(config)
@@ -120,40 +108,45 @@ class TestMain(TestCase):
        assert env.count_agents(weight=3) == 1
        assert env.count_agents(agent_class=CustomAgent) == 2

    def test_torvalds_example(self):
        """A complete example from a documentation should work."""
        config = serialization.load_file(join(EXAMPLES, "torvalds.yml"))[0]
        config["model_params"]["network_params"]["path"] = join(
            EXAMPLES, config["model_params"]["network_params"]["path"]
        )
        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
        for a in env.network_agents:
            skill_level = a.state["skill_level"]
            if a.id == "Torvalds":
                assert skill_level == "God"
                assert a.state["total"] == 3
                assert a.state["neighbors"] == 2
            elif a.id == "balkian":
                assert skill_level == "developer"
                assert a.state["total"] == 3
                assert a.state["neighbors"] == 1
            else:
                assert skill_level == "beginner"
                assert a.state["total"] == 3
                assert a.state["neighbors"] == 1

    def test_serialize_class(self):
        ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
        assert name == "soil.agents.BaseAgent"
        assert ser == agents.BaseAgent

        ser, name = serialization.serialize(
            agents.BaseAgent,
            known_modules=[
                "soil",
            ],
        )
        assert name == "BaseAgent"
        assert ser == agents.BaseAgent

        ser, name = serialization.serialize(CustomAgent)
        assert name == "test_main.CustomAgent"
        assert ser == CustomAgent
        pickle.dumps(ser)
@@ -166,72 +159,63 @@ class TestMain(TestCase):
        assert i == des

    def test_serialize_agent_class(self):
        """A class from soil.agents should be serialized without the module part"""
        ser = agents.serialize_type(CustomAgent)
        assert ser == "test_main.CustomAgent"
        ser = agents.serialize_type(agents.BaseAgent)
        assert ser == "BaseAgent"
        pickle.dumps(ser)

    def test_deserialize_agent_distribution(self):
        agent_distro = [
            {"agent_class": "CounterModel", "weight": 1},
            {"agent_class": "test_main.CustomAgent", "weight": 2},
        ]
        converted = agents.deserialize_definition(agent_distro)
        assert converted[0]["agent_class"] == agents.CounterModel
        assert converted[1]["agent_class"] == CustomAgent
        pickle.dumps(converted)

    def test_serialize_agent_distribution(self):
        agent_distro = [
            {"agent_class": agents.CounterModel, "weight": 1},
            {"agent_class": CustomAgent, "weight": 2},
        ]
        converted = agents.serialize_definition(agent_distro)
        assert converted[0]["agent_class"] == "CounterModel"
        assert converted[1]["agent_class"] == "test_main.CustomAgent"
        pickle.dumps(converted)

    def test_templates(self):
        """Loading a template should result in several configs"""
        configs = serialization.load_file(join(EXAMPLES, "template.yml"))
        assert len(configs) > 0

    def test_until(self):
        config = {
            "name": "until_sim",
            "model_params": {
                "network_params": {},
                "agents": {
                    "fixed": [
                        {
                            "agent_class": agents.BaseAgent,
                        }
                    ]
                },
            },
            "max_time": 2,
            "num_trials": 50,
        }
        s = simulation.from_config(config)
        runs = list(s.run_simulation(dry_run=True))
        over = list(x.now for x in runs if x.now > 2)
        assert len(runs) == config["num_trials"]
        assert len(over) == 0

    def test_fsm(self):
        """Basic state change"""

        class ToggleAgent(agents.FSM):
            @agents.default_state
            @agents.state
@@ -250,7 +234,8 @@ class TestMain(TestCase):
        assert a.state_id == a.ping.id

    def test_fsm_when(self):
        """Basic state change"""

        class ToggleAgent(agents.FSM):
            @agents.default_state
            @agents.state


@@ -1,4 +1,4 @@
"""
Mesa-SOIL integration tests

We have to test that:
@@ -8,13 +8,15 @@ We have to test that:
- Mesa visualizations work with SOIL simulations
"""
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid


class MoneyAgent(Agent):
    """An agent with fixed initial wealth."""

    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.wealth = 1
@@ -33,15 +35,15 @@ class MoneyAgent(Agent):
    def move(self):
        possible_steps = self.model.grid.get_neighborhood(
            self.pos, moore=True, include_center=False
        )
        new_position = self.random.choice(possible_steps)
        self.model.grid.move_agent(self, new_position)


class MoneyModel(Model):
    """A model with some number of agents."""

    def __init__(self, N, width, height):
        self.num_agents = N
        self.grid = MultiGrid(width, height, True)
@@ -58,7 +60,7 @@ class MoneyModel(Model):
            self.grid.place_agent(a, (x, y))

    def step(self):
        """Advance the model by one step."""
        self.schedule.step()


@@ -10,7 +10,7 @@ from soil import config, network, environment, agents, simulation
from test_main import CustomAgent

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, "..", "examples")


class TestNetwork(TestCase):
@@ -19,21 +19,13 @@ class TestNetwork(TestCase):
        Load a graph from file if the extension is known.
        Raise an exception otherwise.
        """
        config = {"network_params": {"path": join(ROOT, "test.gexf")}}
        G = network.from_config(config["network_params"])
        assert G
        assert len(G) == 2
        with self.assertRaises(AttributeError):
            config = {"network_params": {"path": join(ROOT, "unknown.extension")}}
            G = network.from_config(config["network_params"])
            print(G)

    def test_generate_barabasi(self):
@@ -41,88 +33,73 @@ class TestNetwork(TestCase):
        If no path is given, a generator and network parameters
        should be used to generate a network
        """
        cfg = {"params": {"generator": "barabasi_albert_graph"}}
        with self.assertRaises(Exception):
            G = network.from_config(cfg)
        cfg["params"]["n"] = 100
        cfg["params"]["m"] = 10
        G = network.from_config(cfg)
        assert len(G) == 100

    def test_save_geometric(self):
        """
        There is a bug in networkx that prevents it from creating a GEXF file
        from geometric models. We should work around it.
        """
        G = nx.random_geometric_graph(20, 0.1)
        env = environment.NetworkEnvironment(topology=G)
        f = io.BytesIO()
        assert env.G
        network.dump_gexf(env.G, f)

    def test_networkenvironment_creation(self):
        """Networkenvironment should accept netconfig as parameters"""
        model_params = {
            "topology": {"path": join(ROOT, "test.gexf")},
            "agents": {
                "topology": True,
                "distribution": [
                    {
                        "agent_class": CustomAgent,
                    }
                ],
            },
        }
        env = environment.Environment(**model_params)
        assert env.G
        env.step()
        assert len(env.G) == 2
        assert len(env.agents) == 2
        assert env.agents[1].count_agents(state_id="normal") == 2
        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_custom_agent_neighbors(self):
        """Allow for search of neighbors with a certain state_id"""
        config = {
            "model_params": {
                "topology": {"path": join(ROOT, "test.gexf")},
                "agents": {
                    "topology": True,
                    "distribution": [{"weight": 1, "agent_class": CustomAgent}],
                },
            },
            "max_time": 10,
        }
        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
        assert env.agents[1].count_agents(state_id="normal") == 2
        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_subgraph(self):
        """An agent should be able to subgraph the global topology"""
        G = nx.Graph()
        G.add_node(3)
        G.add_edge(1, 2)
        distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
        aconfig = config.AgentConfig(distribution=distro, topology=True)
        env = environment.Environment(name="Test", topology=G, agents=aconfig)
        lst = list(env.network_agents)

        a2 = env.find_one(node_id=2)