mirror of https://github.com/gsi-upm/soil synced 2025-09-14 04:02:21 +00:00

Compare commits


13 Commits

Author | SHA1 | Message | Date

J. Fernando Sánchez | 65f6aa72f3 | fix timeout in FSM. Improve logs | 2019-02-01 19:05:07 +01:00
J. Fernando Sánchez | 09e14c6e84 | Add generator and programmatic examples | 2018-12-20 19:25:33 +01:00
J. Fernando Sánchez | 8593ac999d | Swap test and build in CI. Remove tests in tags | 2018-12-20 17:56:33 +01:00
J. Fernando Sánchez | 90338c3549 | skip-tls-verify in kaniko | 2018-12-20 17:48:58 +01:00
J. Fernando Sánchez | 1d532dacfe | Remove entrypoint build stage | 2018-12-20 15:14:58 +01:00
J. Fernando Sánchez | a1f8d8c9c5 | Change entrypoint build stage | 2018-12-20 15:07:45 +01:00
J. Fernando Sánchez | de326eb331 | Remove CI global image | 2018-12-20 15:05:45 +01:00
J. Fernando Sánchez | 04b4380c61 | Fix wrong import soil.web | 2018-12-20 14:06:18 +01:00
J. Fernando Sánchez | d70a0c865c | limit ci jobs to docker runners | 2018-12-09 17:22:40 +01:00
J. Fernando Sánchez | 625c28e4ee | Fix CI syntax | 2018-12-09 17:09:31 +01:00
J. Fernando Sánchez | 9749f4ca14 | Fix multithreading
Multithreading needs pickling to work.
Pickling/unpickling did not work in some situations, such as when the
environment_agents parameter was left blank.
This was due to two reasons:

1) agents and history had no __setstate__ method, and some of their
attributes cannot be pickled (generators, sqlite connections)
2) the environment was adding generators (agents) to its state.

This commit fixes the situation by restricting the keys that the environment
exports when it is pickled, and by adding __getstate__/__setstate__ methods
to agents (a minimal sketch of this pattern appears after the commit list).

The resulting pickles contain enough information to inspect them
(history, state values, etc.), but little more.
2018-12-09 16:58:49 +01:00
J. Fernando Sánchez | 3526fa29d7 | Fix bug parallel | 2018-12-09 14:06:50 +01:00
J. Fernando Sánchez | 53604c1e66 | Fix quickstart.rst markdown code | 2018-12-09 13:10:00 +01:00
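
The pickling strategy described in the "Fix multithreading" commit above can be sketched with a toy class. This is an illustrative, hypothetical example, not the actual soil code: unpicklable members are dropped in `__getstate__` and the remaining attributes are restored in `__setstate__`.

import pickle


class SketchAgent:
    '''Hypothetical stand-in for a soil agent, for illustration only.'''

    def __init__(self, agent_id, environment=None):
        self.id = agent_id
        self.env = environment
        self._state = {}
        # Generators cannot be pickled, so this must be excluded on serialization
        self._routine = (step for step in range(10))

    def __getstate__(self):
        # Export only what is needed to inspect the agent later
        return {'id': self.id, 'environment': self.env, '_state': self._state}

    def __setstate__(self, state):
        self.id = state['id']
        self.env = state['environment']
        self._state = state['_state']
        self._routine = None  # running information is lost, as the commit notes


agent = SketchAgent(25)
agent._state['key'] = 'test'
recovered = pickle.loads(pickle.dumps(agent))
assert recovered._state['key'] == 'test'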
19 changed files with 341 additions and 85 deletions

View File

@@ -1,21 +1,28 @@
image: python:3.7
steps:
- build
stages:
- test
- build
build:
stage: build
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
tags:
- docker
script:
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
# The skip-tls-verify flag is there because our registry certificate is self signed
- /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
only:
- tags
test:
except:
- tags # Avoid running tests for tags, because they are already run for the branch
tags:
- docker
image: python:3.7
stage: test
script:
python setup.py test
- python setup.py test

CHANGELOG.md (new file, +19 lines)
View File

@@ -0,0 +1,19 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Changed
### Added
### Fixed
## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load a history for examination, at the expense of possibly overwriting an existing database. In practice that should not happen, because a History is only created by the Environment, which passes `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state, e.g. `return self.idle, self.env.timeout(2)` will run the *idle* state again in 2 *units of time* (`t_step=now+2`). A combined sketch follows this list.
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.
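
The two new features above (FSM timeouts and `ids` entries in `network_agents`) can be combined as in the following hypothetical sketch. The agent name and parameters are illustrative; the repository's `examples/` directory contains the canonical versions.

from soil import Simulation
from soil.agents import FSM, state, default_state


class Sleeper(FSM):
    '''Hypothetical agent that re-runs its only state every 2 units of time.'''

    @default_state
    @state
    def idle(self):
        self.log('Running at {}'.format(self.now))
        # New in 0.13.7: return the next state together with a timeout
        return self.idle, self.env.timeout(2)  # next execution at t_step = now + 2


s = Simulation(network_agents=[{'ids': [0], 'agent_type': Sleeper}],  # pin node 0 to Sleeper
               network_params={'generator': 'complete_graph', 'n': 1},
               max_time=10,
               dry_run=True)
s.run()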

View File

@@ -16,15 +16,14 @@ The configuration includes things such as number of agents to use and their type
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.
The configuration is the following:
Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
.. literalinclude:: quickstart.yml
:language: yaml
Configuration
=============
You may :download:`download the file <quickstart.yml>` directly.
The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent), and its parameters are the probabilities of changing from one state to another, either spontaneously or because of contagion from neighboring agents.
@@ -79,16 +78,16 @@ Change some of the parameters, such as the number of agents, the probability of
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
```
pip install soil[web]
```
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
```
soil-web
.. code::
OR
soil-web
python -m soil.web
```
# OR
python -m soil.web

View File

@@ -26,7 +26,7 @@ But before that, let's import the soil module and networkx.
%autoreload 2
%pylab inline
# To display plots in the notebooed_
# To display plots in the notebook_
.. parsed-literal::

View File

@@ -0,0 +1,17 @@
---
name: custom-generator
description: Using a custom generator for the network
num_trials: 3
dry_run: True
max_time: 100
interval: 1
network_params:
    generator: mymodule.mygenerator
    # These are custom parameters
    n: 10
    n_edges: 5
network_agents:
    - agent_type: CounterModel
      weight: 1
      state:
        id: 0

View File

@@ -0,0 +1,27 @@
from networkx import Graph
import networkx as nx
from random import choice


def mygenerator(n=5, n_edges=5):
    '''
    Just a simple generator that creates a network with n nodes and
    n_edges edges. Edges are assigned randomly, only avoiding self loops.
    '''
    G = nx.Graph()

    for i in range(n):
        G.add_node(i)

    for i in range(n_edges):
        nodes = list(G.nodes)
        n_in = choice(nodes)
        nodes.remove(n_in)  # Avoid loops
        n_out = choice(nodes)
        G.add_edge(n_in, n_out)
    return G
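
As a quick, hypothetical sanity check (run from the same directory as the module above), the generator can also be called directly, outside of soil:

from mymodule import mygenerator

G = mygenerator(n=10, n_edges=5)
assert G.number_of_nodes() == 10
# An undirected Graph collapses repeated edges, so there may be fewer than n_edges
assert G.number_of_edges() <= 5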

View File

@@ -0,0 +1,36 @@
from soil.agents import FSM, state, default_state


class Fibonacci(FSM):
    '''Agent that only executes in t_steps that are Fibonacci numbers'''

    defaults = {
        'prev': 1
    }

    @default_state
    @state
    def counting(self):
        self.log('Stopping at {}'.format(self.now))
        prev, self['prev'] = self['prev'], max([self.now, self['prev']])
        return None, self.env.timeout(prev)


class Odds(FSM):
    '''Agent that only executes in odd t_steps'''

    @default_state
    @state
    def odds(self):
        self.log('Stopping at {}'.format(self.now))
        return None, self.env.timeout(1+self.now%2)


if __name__ == '__main__':
    import logging
    logging.basicConfig(level=logging.INFO)

    from soil import Simulation
    s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci},
                                   {'ids': [1], 'agent_type': Odds}],
                   dry_run=True,
                   network_params={"generator": "complete_graph", "n": 2},
                   max_time=100,
                   )
    s.run()

examples/programmatic/.gitignore (vendored new file, +1 line)
View File

@@ -0,0 +1 @@
Programmatic*

View File

@@ -0,0 +1,38 @@
'''
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from networkx import Graph
import logging


def mygenerator():
    # Add only a node
    G = Graph()
    G.add_node(1)
    return G


class MyAgent(agents.FSM):

    @agents.default_state
    @agents.state
    def neutral(self):
        self.info('I am running')


s = Simulation(name='Programmatic',
               network_params={'generator': mygenerator},
               num_trials=1,
               max_time=100,
               agent_type=MyAgent,
               dry_run=True)

logging.basicConfig(level=logging.INFO)
envs = s.run()

s.dump_yaml()
for env in envs:
    env.dump_csv()

View File

@@ -1 +1 @@
0.13.0
0.13.7

View File

@@ -11,11 +11,10 @@ try:
except NameError:
basestring = str
logging.basicConfig()
from . import agents
from .simulation import *
from .environment import Environment
from .history import History
from . import utils
from . import analysis
@@ -23,6 +22,9 @@ def main():
import argparse
from . import simulation
logging.basicConfig(level=logging.INFO)
logging.info('Running SOIL version: {}'.format(__version__))
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
parser.add_argument('file', type=str,
nargs="?",
@@ -62,7 +64,7 @@ def main():
simulation.run_from_config(args.file,
dry_run=args.dry_run,
dump=dump,
parallel=(not args.synchronous and not args.pdb),
parallel=(not args.synchronous),
results_dir=args.output)
except Exception:
if args.pdb:

View File

@@ -24,20 +24,16 @@ class BaseAgent(nxsim.BaseAgent):
defaults = {}
def __init__(self, environment, agent_id=None, state=None,
name='network_process', interval=None, **state_params):
def __init__(self, environment, agent_id, state=None,
name=None, interval=None, **state_params):
# Check for REQUIRED arguments
assert environment is not None, TypeError('__init__ missing 1 required keyword argument: \'environment\'. '
'Cannot be NoneType.')
# Initialize agent parameters
self.id = agent_id
self.name = name
self.name = name or '{}[{}]'.format(type(self).__name__, self.id)
self.state_params = state_params
# Global parameters
self.global_topology = environment.G
self.environment_params = environment.environment_params
# Register agent to environment
self.env = environment
@@ -50,8 +46,8 @@ class BaseAgent(nxsim.BaseAgent):
if not hasattr(self, 'level'):
self.level = logging.DEBUG
self.logger = logging.getLogger('{}-Agent-{}'.format(self.env.name,
self.id))
self.logger = logging.getLogger('{}.{}'.format(self.env.name,
self.id))
self.logger.setLevel(self.level)
# initialize every time an instance of the agent is created
@@ -73,6 +69,18 @@ class BaseAgent(nxsim.BaseAgent):
for k, v in value.items():
self[k] = v
@property
def global_topology(self):
return self.env.G
@property
def environment_params(self):
return self.env.environment_params
@environment_params.setter
def environment_params(self, value):
self.env.environment_params = value
def __getitem__(self, key):
if isinstance(key, tuple):
key, t_step = key
@@ -126,9 +134,6 @@ class BaseAgent(nxsim.BaseAgent):
def step(self):
pass
def to_json(self):
return json.dumps(self.state)
def count_agents(self, state_id=None, limit_neighbors=False):
if limit_neighbors:
agents = self.global_topology.neighbors(self.id)
@@ -169,7 +174,7 @@ class BaseAgent(nxsim.BaseAgent):
def log(self, message, *args, level=logging.INFO, **kwargs):
message = message + " ".join(str(i) for i in args)
message = "\t@{:>5}:\t{}".format(self.now, message)
message = "\t{:10}@{:>5}:\t{}".format(self.name, self.now, message)
for k, v in kwargs:
message += " {k}={v} ".format(k, v)
extra = {}
@@ -183,6 +188,26 @@ class BaseAgent(nxsim.BaseAgent):
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
def __getstate__(self):
'''
Serializing an agent will lose all its running information (you cannot
serialize an iterator), but it keeps the state and link to the environment,
so it can be used for inspection and dumping to a file
'''
state = {}
state['id'] = self.id
state['environment'] = self.env
state['_state'] = self._state
return state
def __setstate__(self, state):
'''
Get back a serialized agent and try to re-compose it
'''
self.id = state['id']
self._state = state['_state']
self.env = state['environment']
def state(func):
'''
@@ -255,7 +280,7 @@ class FSM(BaseAgent, metaclass=MetaFSM):
raise Exception('{} has no valid state id or default state'.format(self))
if next_state not in self.states:
raise Exception('{} is not a valid id for {}'.format(next_state, self))
self.states[next_state](self)
return self.states[next_state](self)
def set_state(self, state):
if hasattr(state, 'id'):
@@ -281,6 +306,9 @@ def prob(prob=1):
return r < prob
STATIC_THRESHOLD = (-1, -1)
def calculate_distribution(network_agents=None,
agent_type=None):
'''
@@ -318,6 +346,9 @@ def calculate_distribution(network_agents=None,
total = sum(x.get('weight', 1) for x in network_agents)
acc = 0
for v in network_agents:
if 'ids' in v:
v['threshold'] = STATIC_THRESHOLD
continue
upper = acc + (v.get('weight', 1)/total)
v['threshold'] = [acc, upper]
acc = upper
@@ -336,7 +367,7 @@ def serialize_distribution(network_agents, known_modules=[]):
When serializing an agent distribution, remove the thresholds, in order
to avoid cluttering the YAML definition file.
'''
d = deepcopy(network_agents)
d = deepcopy(list(network_agents))
for v in d:
if 'threshold' in v:
del v['threshold']
@@ -378,17 +409,20 @@ def _convert_agent_types(ind, to_string=False, **kwargs):
return deserialize_distribution(ind, **kwargs)
def _agent_from_distribution(distribution, value=-1):
def _agent_from_distribution(distribution, value=-1, agent_id=None):
"""Used in the initialization of agents given an agent distribution."""
if value < 0:
value = random.random()
for d in distribution:
for d in sorted(distribution, key=lambda x: x['threshold']):
threshold = d['threshold']
if value >= threshold[0] and value < threshold[1]:
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
# Check if the definition matches by id (first) or by threshold
if not ((agent_id is not None and threshold == STATIC_THRESHOLD and agent_id in d['ids']) or \
(value >= threshold[0] and value < threshold[1])):
continue
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
raise Exception('Distribution for value {} not found in: {}'.format(value, distribution))
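
The threshold logic above works as follows: entries with explicit `ids` are tagged with the `STATIC_THRESHOLD` sentinel and matched by node id, while weighted entries split the interval [0, 1) proportionally to their weight, so a uniform random value picks an agent type with the right probability. A simplified, standalone re-implementation of the weighting part (not the library code itself):

from copy import deepcopy


def assign_thresholds(network_agents):
    '''Simplified sketch of the weight-to-interval logic shown above.'''
    network_agents = deepcopy(network_agents)
    total = sum(x.get('weight', 1) for x in network_agents)
    acc = 0
    for entry in network_agents:
        upper = acc + (entry.get('weight', 1) / total)
        entry['threshold'] = [acc, upper]
        acc = upper
    return network_agents


distro = assign_thresholds([{'agent_type': 'A', 'weight': 1},
                            {'agent_type': 'B', 'weight': 3}])
# A gets [0, 0.25) and B gets [0.25, 1.0); a random value in [0, 1)
# then selects an agent type with probability proportional to its weight.
assert distro[0]['threshold'] == [0, 0.25]
assert distro[1]['threshold'] == [0.25, 1.0]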

View File

@@ -4,9 +4,11 @@ import time
import csv
import random
import simpy
import yaml
import tempfile
import pandas as pd
from copy import deepcopy
from collections import Counter
from networkx.readwrite import json_graph
import networkx as nx
@@ -14,6 +16,14 @@ import nxsim
from . import utils, agents, analysis, history
# These properties will be copied when pickling/unpickling the environment
_CONFIG_PROPS = [ 'name',
'states',
'default_state',
'interval',
'dry_run',
'dir_path',
]
class Environment(nxsim.NetworkEnvironment):
"""
@@ -52,7 +62,8 @@ class Environment(nxsim.NetworkEnvironment):
if not dry_run:
self.get_path()
self._history = history.History(name=self.name if not dry_run else None,
dir_path=self.dir_path)
dir_path=self.dir_path,
backup=True)
# Add environment agents first, so their events get
# executed before network agents
self.environment_agents = environment_agents or []
@@ -103,7 +114,7 @@ class Environment(nxsim.NetworkEnvironment):
agent_type = None
if 'agent_type' in self.states.get(agent_id, {}):
agent_type = self.states[agent_id]
agent_type = self.states[agent_id]['agent_type']
elif 'agent_type' in node:
agent_type = node['agent_type']
elif 'agent_type' in self.default_state:
@@ -111,8 +122,8 @@ class Environment(nxsim.NetworkEnvironment):
if agent_type:
agent_type = agents.deserialize_type(agent_type)
else:
agent_type, state = agents._agent_from_distribution(agent_distribution)
elif agent_distribution:
agent_type, state = agents._agent_from_distribution(agent_distribution, agent_id=agent_id)
return self.set_agent(agent_id, agent_type, state)
def set_agent(self, agent_id, agent_type, state=None):
@@ -122,10 +133,12 @@ class Environment(nxsim.NetworkEnvironment):
defstate.update(node.get('state', {}))
if state:
defstate.update(state)
state = defstate
a = agent_type(environment=self,
agent_id=agent_id,
state=state)
a = None
if agent_type:
state = defstate
a = agent_type(environment=self,
agent_id=agent_id,
state=state)
node['agent'] = a
return a
@@ -145,8 +158,10 @@ class Environment(nxsim.NetworkEnvironment):
def run(self, *args, **kwargs):
self._save_state()
self.log_stats()
super().run(*args, **kwargs)
self._history.flush_cache()
self.log_stats()
def _save_state(self, now=None):
# for agent in self.agents:
@@ -319,20 +334,40 @@ class Environment(nxsim.NetworkEnvironment):
return G
def stats(self):
stats = {}
stats['network'] = {}
stats['network']['n_nodes'] = self.G.number_of_nodes()
stats['network']['n_edges'] = self.G.number_of_edges()
c = Counter()
c.update(a.__class__.__name__ for a in self.network_agents)
stats['agents'] = {}
stats['agents']['model_count'] = dict(c)
c2 = Counter()
c2.update(a['id'] for a in self.network_agents)
stats['agents']['state_count'] = dict(c2)
stats['params'] = self.environment_params
return stats
def log_stats(self):
stats = self.stats()
utils.logger.info('Environment stats: \n{}'.format(yaml.dump(stats, default_flow_style=False)))
def __getstate__(self):
state = self.__dict__.copy()
state = {}
for prop in _CONFIG_PROPS:
state[prop] = self.__dict__[prop]
state['G'] = json_graph.node_link_data(self.G)
state['network_agents'] = agents._serialize_distribution(self.network_agents)
state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
to_string=True)
state['environment_agents'] = self._env_agents
state['history'] = self._history
return state
def __setstate__(self, state):
self.__dict__ = state
for prop in _CONFIG_PROPS:
self.__dict__[prop] = state[prop]
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
self.network_agents = self.calculate_distribution(self._convert_agent_types(self.network_agents))
self.environment_agents = self._convert_agent_types(self.environment_agents)
return state
self._history = state['history']
SoilEnvironment = Environment
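
The new `stats()` method boils down to two `collections.Counter` passes over the network agents, one over class names and one over state ids. A stripped-down sketch of the same idea, independent of soil (the agent snapshots below are made up for illustration):

from collections import Counter

# Hypothetical agent snapshots: class name plus current state id
agents = [{'class': 'CounterModel', 'state_id': 0},
          {'class': 'CounterModel', 'state_id': 1},
          {'class': 'Fibonacci', 'state_id': 0}]

model_count = Counter(a['class'] for a in agents)
state_count = Counter(a['state_id'] for a in agents)

stats = {'agents': {'model_count': dict(model_count),
                    'state_count': dict(state_count)}}
assert stats['agents']['model_count'] == {'CounterModel': 2, 'Fibonacci': 1}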

View File

@@ -3,6 +3,10 @@ import os
import pandas as pd
import sqlite3
import copy
import logging
logger = logging.getLogger(__name__)
from collections import UserDict, namedtuple
from . import utils
@@ -13,7 +17,7 @@ class History:
Store and retrieve values from a sqlite database.
"""
def __init__(self, db_path=None, name=None, dir_path=None, backup=True):
def __init__(self, db_path=None, name=None, dir_path=None, backup=False):
if db_path is None and name:
db_path = os.path.join(dir_path or os.getcwd(),
'{}.db.sqlite'.format(name))
@@ -28,6 +32,7 @@ class History:
self.db = db_path
with self.db:
logger.debug('Creating database {}'.format(self.db_path))
self.db.execute('''CREATE TABLE IF NOT EXISTS history (agent_id text, t_step int, key text, value text text)''')
self.db.execute('''CREATE TABLE IF NOT EXISTS value_types (key text, value_type text)''')
self.db.execute('''CREATE UNIQUE INDEX IF NOT EXISTS idx_history ON history (agent_id, t_step, key);''')
@@ -38,7 +43,7 @@ class History:
def db(self):
try:
self._db.cursor()
except sqlite3.ProgrammingError:
except (sqlite3.ProgrammingError, AttributeError):
self.db = None # Reset the database
return self._db
@@ -46,6 +51,7 @@ class History:
def db(self, db_path=None):
db_path = db_path or self.db_path
if isinstance(db_path, str):
logger.debug('Connecting to database {}'.format(db_path))
self._db = sqlite3.connect(db_path)
else:
self._db = db_path
@@ -110,6 +116,7 @@ class History:
Use a cache to save state changes to avoid opening a session for every change.
The cache will be flushed at the end of the simulation, and when history is accessed.
'''
logger.debug('Flushing cache {}'.format(self.db_path))
with self.db:
for rec in self._tups:
self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
@@ -208,6 +215,16 @@ class History:
df_p = df_p.reindex(t_steps, method='ffill')
return df_p.ffill()
def __getstate__(self):
state = dict(**self.__dict__)
del state['_db']
del state['_dtypes']
return state
def __setstate__(self, state):
self.__dict__ = state
self._dtypes = {}
class Records():
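
The pattern used in this hunk, dropping the sqlite connection on pickling and lazily reconnecting through the `db` property, can be sketched independently of soil (class and attribute names simplified):

import pickle
import sqlite3


class LazyDB:
    '''Minimal sketch: reconnect to sqlite on demand, survive pickling.'''

    def __init__(self, db_path=':memory:'):
        self.db_path = db_path

    @property
    def db(self):
        try:
            self._db.cursor()          # Fails if closed or never opened
        except (sqlite3.ProgrammingError, AttributeError):
            self._db = sqlite3.connect(self.db_path)
        return self._db

    def __getstate__(self):
        state = dict(self.__dict__)
        state.pop('_db', None)         # Connections cannot be pickled
        return state


store = LazyDB()
store.db.execute('CREATE TABLE IF NOT EXISTS t (k text)')
clone = pickle.loads(pickle.dumps(store))
clone.db.execute('SELECT 1')           # Reconnects transparently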

View File

@@ -88,14 +88,8 @@ class Simulation(NetworkSimulation):
environment_agents=None, environment_params=None,
environment_class=None, **kwargs):
if topology is None:
topology = utils.load_network(network_params,
dir_path=dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
self.seed = str(seed) or str(time.time())
self.load_module = load_module
self.topology = nx.Graph(topology)
self.network_params = network_params
self.name = name or 'UnnamedSimulation'
self.num_trials = num_trials
@@ -103,12 +97,19 @@ class Simulation(NetworkSimulation):
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
self.seed = str(seed) or str(time.time())
self.dump = dump
self.dry_run = dry_run
sys.path += [self.dir_path, os.getcwd()]
if topology is None:
topology = utils.load_network(network_params,
dir_path=self.dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
self.topology = nx.Graph(topology)
self.environment_params = environment_params or {}
self.environment_class = utils.deserialize(environment_class,
known_modules=['soil.environment', ]) or Environment
@@ -201,7 +202,7 @@ class Simulation(NetworkSimulation):
return self.run_trial(*args, **kwargs)
except Exception as ex:
c = ex.__cause__
c.message = ''.join(traceback.format_tb(c.__traceback__)[3:])
c.message = ''.join(traceback.format_exception(type(c), c, c.__traceback__)[:])
return c
def to_dict(self):

View File

@@ -1,5 +1,6 @@
import os
import ast
import sys
import yaml
import logging
import importlib
@@ -36,9 +37,14 @@ def load_network(network_params, dir_path=None):
return method(path, **kwargs)
net_args = network_params.copy()
net_type = net_args.pop('generator')
net_gen = net_args.pop('generator')
if dir_path not in sys.path:
sys.path.append(dir_path)
method = deserializer(net_gen,
known_modules=['networkx.generators',])
method = getattr(nx.generators, net_type)
return method(**net_args)
@@ -114,6 +120,8 @@ def serialize(v, known_modules=[]):
return func(v), tname
def deserializer(type_, known_modules=[]):
if type(type_) != str: # Already deserialized
return type_
if type_ == 'str':
return lambda x='': x
if type_ == 'None':
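
The key change in this file is that the network generator name is now resolved through the generic `deserializer` helper instead of a hard-coded `getattr(nx.generators, ...)`, so custom dotted paths such as `mymodule.mygenerator` work once `dir_path` is on `sys.path`. A simplified, standalone resolver in the same spirit (not the actual helper) might look like this:

import importlib


def resolve_generator(name, known_modules=('networkx.generators',)):
    '''Simplified sketch of dotted-name resolution for network generators.'''
    if not isinstance(name, str):
        return name                      # Already a callable
    if '.' in name:
        module_name, _, attr = name.rpartition('.')
        return getattr(importlib.import_module(module_name), attr)
    for module_name in known_modules:    # Bare names come from known modules
        module = importlib.import_module(module_name)
        if hasattr(module, name):
            return getattr(module, name)
    raise ValueError('Could not resolve generator: {}'.format(name))


gen = resolve_generator('complete_graph')
assert gen(3).number_of_edges() == 3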

View File

@@ -19,7 +19,7 @@ from xml.etree.ElementTree import tostring
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor
from ..simulation import SoilSimulation
from ..simulation import Simulation
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
@@ -168,7 +168,7 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
@run_on_executor
def nonblocking(self, config):
simulation = SoilSimulation(**config)
simulation = Simulation(**config)
return simulation.run()
@tornado.gen.coroutine

View File

@@ -120,18 +120,18 @@ class TestHistory(TestCase):
assert os.path.exists(db_path)
# Recover the data
recovered = history.History(db_path=db_path, backup=False)
recovered = history.History(db_path=db_path)
assert recovered['a_1', 0, 'id'] == 'v'
assert recovered['a_1', 4, 'id'] == 'e'
# Using the same name should create a backup copy
# Using backup=True should create a backup copy, and initialize an empty history
newhistory = history.History(db_path=db_path, backup=True)
backuppaths = glob(db_path + '.backup*.sqlite')
assert len(backuppaths) == 1
backuppath = backuppaths[0]
assert newhistory.db_path == h.db_path
assert os.path.exists(backuppath)
assert not len(newhistory[None, None, None])
assert len(newhistory[None, None, None]) == 0
def test_history_tuples(self):
"""

View File

@@ -2,6 +2,7 @@ from unittest import TestCase
import os
import yaml
import pickle
import networkx as nx
from functools import partial
@@ -248,12 +249,10 @@ class TestMain(TestCase):
assert name == 'soil.agents.BaseAgent'
assert ser == agents.BaseAgent
class CustomAgent(agents.BaseAgent):
pass
ser, name = utils.serialize(CustomAgent)
assert name == 'test_main.CustomAgent'
assert ser == CustomAgent
pickle.dumps(ser)
def test_serialize_builtin_types(self):
@@ -269,6 +268,7 @@ class TestMain(TestCase):
assert ser == 'test_main.CustomAgent'
ser = agents.serialize_type(agents.BaseAgent)
assert ser == 'BaseAgent'
pickle.dumps(ser)
def test_deserialize_agent_distribution(self):
agent_distro = [
@@ -284,6 +284,7 @@ class TestMain(TestCase):
converted = agents.deserialize_distribution(agent_distro)
assert converted[0]['agent_type'] == agents.CounterModel
assert converted[1]['agent_type'] == CustomAgent
pickle.dumps(converted)
def test_serialize_agent_distribution(self):
agent_distro = [
@@ -299,6 +300,20 @@ class TestMain(TestCase):
converted = agents.serialize_distribution(agent_distro)
assert converted[0]['agent_type'] == 'CounterModel'
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_pickle_agent_environment(self):
env = Environment(name='Test')
a = agents.BaseAgent(environment=env, agent_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert recovered['key'] == 'test'
assert recovered['key', 0] == 'test'
def test_history(self):
'''Test storing in and retrieving from history (sqlite)'''