
Merge branch 'mesa'

First iteration to achieve MESA compatibility.
As a side effect, we have removed `simpy`.

For a full list of changes, see `CHANGELOG.md`.
J. Fernando Sánchez 2022-03-07 10:54:47 +01:00
commit 38f8a8d110
44 changed files with 1121 additions and 996 deletions

.gitignore

@ -8,3 +8,4 @@ soil_output
docs/_build*
build/*
dist/*
prof

CHANGELOG.md

@ -3,6 +3,21 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get SQL data from the history
* An option to choose whether a database should be used for history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id`, to be compatible with `mesa`. A property `BaseAgent.id` has been added for backwards compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. It has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
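
The renames above are mostly mechanical. As a rough, hypothetical migration sketch (not part of this release; the class name and comments are illustrative), an agent written against 0.15.x now accesses the same information like this:

```python
from soil import agents

class MyAgent(agents.BaseAgent):
    def step(self):
        # 0.15.x -> 0.20.0 attribute renames (old names remain available as properties):
        #   self.id          -> self.unique_id  (BaseAgent.id is still there)
        #   self.environment -> self.model      (self.env is still there)
        self.debug(f'agent {self.unique_id} at t={self.now} in {self.model.name}')
```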
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation

README.md

@ -5,6 +5,9 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
## Citation
If you use Soil in your research, don't forget to cite this paper:
```bibtex
@ -28,7 +31,24 @@ If you use Soil in your research, don't forget to cite this paper:
```
@Copyright GSI - Universidad Politécnica de Madrid 2017
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
As of this writing, the following is a non-exhaustive list of tasks needed to achieve full compatibility (a short sketch of mixing mesa and soil agents follows the list):
* `Environment.agents` and `mesa.Agent.agents` are not the same. In soil, `agents` is a property, and it only takes network and environment agents into account. `environment_agents` might be renamed to `other_agents` or something similar.
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like scheduler (see the `soil.time` module).
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Document the new APIs and usage
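
As mentioned above, mesa agents can already be used in soil simulations. The pattern used in `examples/mesa` is to mix a plain mesa agent with a soil `NetworkAgent`, so the same behaviour works in both worlds. A condensed, hedged sketch (class names are illustrative, loosely adapted from `examples/mesa/social_wealth.py`):

```python
from mesa import Agent as MesaAgent
from soil import NetworkAgent

class Walker(MesaAgent):
    """A plain mesa agent: it only relies on mesa's unique_id and model."""
    def step(self):
        print(f'{self.unique_id} is stepping')

class SocialWalker(NetworkAgent, Walker):
    """Same behaviour, but aware of the soil network and its state history."""
    def step(self):
        friends = self.get_neighboring_agents()
        self.debug('My friends are:', friends)
        super().step()
```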
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)


@ -47,12 +47,6 @@ There are three main elements in a soil simulation:
- The environment. It assigns agents to nodes in the network, and
stores the environment parameters (shared state for all agents).
Soil is based on ``simpy``, which is an event-based network simulation
library. Soil provides several abstractions over events to make
developing agents easier. This means you can use events (timeouts,
delays) in soil, but for the most part we will assume your models will
be step-based.
Modeling behaviour
------------------


@ -13,7 +13,7 @@ network_agents:
- agent_type: CounterModel
weight: 1
state:
id: 0
state_id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []


@ -13,4 +13,4 @@ network_agents:
- agent_type: CounterModel
weight: 1
state:
id: 0
state_id: 0

examples/mesa/mesa.yml

@ -0,0 +1,21 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_time: 100
interval: 1
seed: '1'
network_params:
generator: social_wealth.graph_generator
n: 5
network_agents:
- agent_type: social_wealth.SocialMoneyAgent
weight: 1
environment_class: social_wealth.MoneyEnv
environment_params:
num_mesa_agents: 5
mesa_agent_type: social_wealth.MoneyAgent
N: 10
width: 50
height: 50
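
Assuming the usual soil entry point (not shown in this diff), a definition like the one above is run the same way as any other simulation file, e.g. `soil mesa.yml` or `python -m soil mesa.yml`; the `environment_class`, `generator` and `mesa_agent_type` values are resolved by name from the `social_wealth` module added below.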

examples/mesa/server.py

@ -0,0 +1,105 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
portrayal["nodes"] = [
{
"id": agent_id,
"size": env.get_agent(agent_id).wealth,
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
"color": "#CC0000",
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
}
for (agent_id) in env.G.nodes
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth*10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})"
}
grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"network_agents": [{"agent_type": SocialMoneyAgent}],
"height": UserSettableParameter(
"slider",
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width",
5,
5,
10,
1,
description="Grid width",
),
"network_params": {
'generator': graph_generator
},
}
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
)
server.port = 8521
server.launch(open_browser=False)

examples/mesa/social_wealth.py

@ -0,0 +1,120 @@
'''
This is an example that adds soil agents and environment in a normal
mesa workflow.
'''
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model):
super().__init__(unique_id=unique_id, model=model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.info("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents())
self.info("Trying to give money")
self.debug("Cellmates: ", cellmates)
self.debug("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(self, N, width, height, *args, network_params, **kwargs):
network_params['n'] = N
super().__init__(*args, network_params=network_params, **kwargs)
self.grid = MultiGrid(width, height, False)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
if __name__ == '__main__':
G = graph_generator()
fixed_params = {"topology": G,
"width": 10,
"network_agents": [{"agent_type": SocialMoneyAgent,
'weight': 1}],
"height": 10}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

examples/mesa/wealth.py

@ -0,0 +1,83 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == '__main__':
fixed_params = {"width": 10,
"height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)


@ -68,12 +68,12 @@ network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
@ -95,7 +95,7 @@ network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: WiseViewer
state:
@ -121,7 +121,7 @@ network_agents:
- agent_type: WiseViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: WiseViewer
state:


@ -34,8 +34,6 @@ class HerdViewer(DumbViewer):
A viewer whose probability of infection depends on the state of its neighbors.
'''
level = logging.DEBUG
def infect(self):
infected = self.count_neighboring_agents(state_id=self.infected.id)
total = self.count_neighboring_agents()


@ -1,7 +1,6 @@
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
from enum import Enum
from random import random, choice
from itertools import islice
import logging
import math
@ -22,7 +21,7 @@ class RabbitModel(FSM):
'offspring': 0,
}
sexual_maturity = 4*30
sexual_maturity = 3 #4*30
life_expectancy = 365 * 3
gestation = 33
pregnancy = -1
@ -31,9 +30,11 @@ class RabbitModel(FSM):
@default_state
@state
def newborn(self):
self.debug(f'I am a newborn at age {self["age"]}')
self['age'] += 1
if self['age'] >= self.sexual_maturity:
self.debug('I am fertile!')
return self.fertile
@state
@ -46,8 +47,7 @@ class RabbitModel(FSM):
return
# Males try to mate
females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
for f in islice(females, self.max_females):
for f in self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False, limit=self.max_females):
r = random()
if r < self['mating_prob']:
self.impregnate(f)


@ -1,7 +1,7 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 500
max_time: 150
interval: 1
seed: MySeed
agent_type: RabbitModel


@ -0,0 +1,31 @@
'''
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from soil.time import Delta
from networkx import Graph
from random import expovariate
import logging
class MyAgent(agents.FSM):
@agents.default_state
@agents.state
def neutral(self):
self.info('I am running')
return None, Delta(expovariate(1/16))
s = Simulation(name='Programmatic',
network_agents=[{'agent_type': MyAgent, 'id': 0}],
topology={'nodes': [{'id': 0}], 'links': []},
num_trials=1,
max_time=100,
agent_type=MyAgent,
dry_run=True)
logging.basicConfig(level=logging.INFO)
envs = s.run()


@ -16,7 +16,7 @@ template:
- agent_type: CounterModel
weight: "{{ x1 }}"
state:
id: 0
state_id: 0
- agent_type: AggregatedCounter
weight: "{{ 1 - x1 }}"
environment_params:


@ -18,12 +18,12 @@ class TerroristSpreadModel(FSM, Geo):
prob_interaction
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.information_spread_intensity = environment.environment_params['information_spread_intensity']
self.terrorist_additional_influence = environment.environment_params['terrorist_additional_influence']
self.prob_interaction = environment.environment_params['prob_interaction']
self.information_spread_intensity = model.environment_params['information_spread_intensity']
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
self.prob_interaction = model.environment_params['prob_interaction']
if self['id'] == self.civilian.id: # Civilian
self.mean_belief = random.uniform(0.00, 0.5)
@ -34,10 +34,10 @@ class TerroristSpreadModel(FSM, Geo):
else:
raise Exception('Invalid state id: {}'.format(self['id']))
if 'min_vulnerability' in environment.environment_params:
self.vulnerability = random.uniform( environment.environment_params['min_vulnerability'], environment.environment_params['max_vulnerability'] )
if 'min_vulnerability' in model.environment_params:
self.vulnerability = random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
else :
self.vulnerability = random.uniform( 0, environment.environment_params['max_vulnerability'] )
self.vulnerability = random.uniform( 0, model.environment_params['max_vulnerability'] )
@state
@ -93,11 +93,11 @@ class TrainingAreaModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.training_influence = environment.environment_params['training_influence']
if 'min_vulnerability' in environment.environment_params:
self.min_vulnerability = environment.environment_params['min_vulnerability']
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params['training_influence']
if 'min_vulnerability' in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
@default_state
@ -120,13 +120,13 @@ class HavenModel(FSM, Geo):
Requires TerroristSpreadModel.
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.haven_influence = environment.environment_params['haven_influence']
if 'min_vulnerability' in environment.environment_params:
self.min_vulnerability = environment.environment_params['min_vulnerability']
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params['haven_influence']
if 'min_vulnerability' in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
self.max_vulnerability = environment.environment_params['max_vulnerability']
self.max_vulnerability = model.environment_params['max_vulnerability']
def get_occupants(self, **kwargs):
return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs)
@ -162,13 +162,13 @@ class TerroristNetworkModel(TerroristSpreadModel):
weight_link_distance
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = environment.environment_params['vision_range']
self.sphere_influence = environment.environment_params['sphere_influence']
self.weight_social_distance = environment.environment_params['weight_social_distance']
self.weight_link_distance = environment.environment_params['weight_link_distance']
self.vision_range = model.environment_params['vision_range']
self.sphere_influence = model.environment_params['sphere_influence']
self.weight_social_distance = model.environment_params['weight_social_distance']
self.weight_link_distance = model.environment_params['weight_link_distance']
@state
def terrorist(self):

requirements.txt

@ -1,9 +1,9 @@
simpy>=4.0
networkx>=2.5
numpy
matplotlib
pyyaml>=5.1
pandas>=0.23
scipy>=1.3
SALib>=1.3
Jinja2
Mesa>=0.8
tsih>=0.1.5

setup.py

@ -16,6 +16,12 @@ def parse_requirements(filename):
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require={
'mesa': ['mesa>=0.8.9'],
'geo': ['scipy>=1.3'],
'web': ['tornado']
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]
setup(
@ -40,10 +46,7 @@ setup(
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
install_requires=install_reqs,
extras_require={
'web': ['tornado']
},
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
include_package_data=True,


@ -1 +1 @@
0.15.2
0.20.0

soil/__init__.py

@ -11,13 +11,14 @@ try:
except NameError:
basestring = str
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment
from .history import History
from . import serialization
from . import analysis
from .utils import logger
from .time import *
def main():
import argparse


@ -1,40 +1,31 @@
import random
from . import BaseAgent
from . import FSM, state, default_state
class BassModel(BaseAgent):
class BassModel(FSM):
"""
Settings:
innovation_prob
imitation_prob
"""
def __init__(self, environment, agent_id, state, **kwargs):
super().__init__(environment=environment, agent_id=agent_id, state=state)
env_params = environment.environment_params
self.state['sentimentCorrelation'] = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
def behaviour(self):
# Outside effects
if random.random() < self['innovation_prob']:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
@default_state
@state
def innovation(self):
if random.random() < self.innovation_prob:
self.sentimentCorrelation = 1
return self.aware
else:
aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if random.random() < (self['imitation_prob']*num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
self.sentimentCorrelation = 1
return self.aware
else:
pass
@state
def aware(self):
self.die()


@ -1,8 +1,8 @@
import random
from . import BaseAgent
from . import FSM, state, default_state
class BigMarketModel(BaseAgent):
class BigMarketModel(FSM):
"""
Settings:
Names:
@ -19,34 +19,25 @@ class BigMarketModel(BaseAgent):
sentiment_about [Array]
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.enterprises = environment.environment_params['enterprises']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params['enterprises']
self.type = ""
self.number_of_enterprises = len(environment.environment_params['enterprises'])
if self.id < self.number_of_enterprises: # Enterprises
self.state['id'] = self.id
if self.id < len(self.enterprises): # Enterprises
self.set_state(self.enterprise.id)
self.type = "Enterprise"
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
else: # normal users
self.state['id'] = self.number_of_enterprises
self.type = "User"
self.set_state(self.user.id)
self.tweet_probability = environment.environment_params['tweet_probability_users']
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
self.sentiment_about = environment.environment_params['sentiment_about'] # List
def step(self):
if self.id < self.number_of_enterprises: # Enterprise
self.enterpriseBehaviour()
else: # Usuario
self.userBehaviour()
for i in range(self.number_of_enterprises): # So that it never is set to 0 if there are not changes (logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def enterpriseBehaviour(self):
@state
def enterprise(self):
if random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
@ -64,12 +55,12 @@ class BigMarketModel(BaseAgent):
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
def userBehaviour(self):
@state
def user(self):
if random.random() < self.tweet_probability: # Tweets
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
# Tweet probability per enterprise
for i in range(self.number_of_enterprises):
for i in range(len(self.enterprises)):
random_num = random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
@ -82,8 +73,10 @@ class BigMarketModel(BaseAgent):
else:
# POSITIVO
self.userTweets("positive",i)
for i in range(len(self.enterprises)): # So that it never is set to 0 if there are not changes (logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def userTweets(self,sentiment,enterprise):
def userTweets(self, sentiment,enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":

soil/agents/Geo.py

@ -0,0 +1,21 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent, as_node
class Geo(NetworkAgent):
'''In this type of network, nodes have a "pos" attribute.'''
def geo_search(self, radius, node=None, center=False, **kwargs):
'''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
node = as_node(node if node is not None else self)
G = self.subgraph(**kwargs)
pos = nx.get_node_attributes(G, 'pos')
if not pos:
return []
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
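
A brief hypothetical usage sketch (not part of this commit), assuming the network nodes carry a `pos` attribute:

```python
from soil.agents import Geo

class ProximityAgent(Geo):
    def step(self):
        # ids of nodes whose 'pos' lies within distance 1.0 of this agent's node
        nearby = self.geo_search(radius=1.0)
        self.debug('Nearby nodes:', nearby)
```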


@ -10,10 +10,10 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = environment.environment_params['innovation_prob']
self.imitation_prob = environment.environment_params['imitation_prob']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.innovation_prob = self.env.environment_params['innovation_prob']
self.imitation_prob = self.env.environment_params['imitation_prob']
self.state['time_awareness'] = 0
self.state['sentimentCorrelation'] = 0


@ -21,8 +21,8 @@ class SpreadModelM2(BaseAgent):
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
@ -123,8 +123,8 @@ class ControlModelM2(BaseAgent):
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])


@ -29,8 +29,8 @@ class SISaModel(FSM):
standard_variance
"""
def __init__(self, environment, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
self.env['standard_variance'])


@ -16,8 +16,8 @@ class SentimentCorrelationModel(BaseAgent):
disgust_prob
"""
def __init__(self, environment, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
self.anger_prob = environment.environment_params['anger_prob']
self.joy_prob = environment.environment_params['joy_prob']

soil/agents/__init__.py

@ -1,21 +1,16 @@
# networkStatus = {} # Dict that will contain the status of every agent in the network
# sentimentCorrelationNodeArray = []
# for x in range(0, settings.network_params["number_of_nodes"]):
# sentimentCorrelationNodeArray.append({'id': x})
# Initialize agent states. Let's assume everyone is normal.
import logging
from collections import OrderedDict
from collections import OrderedDict, defaultdict
from copy import deepcopy
from functools import partial
from scipy.spatial import cKDTree as KDTree
from functools import partial, wraps
from itertools import islice
import json
import simpy
import networkx as nx
from functools import wraps
from .. import serialization, utils, time
from .. import serialization, history, utils
from tsih import Key
from mesa import Agent
def as_node(agent):
@ -23,40 +18,50 @@ def as_node(agent):
return agent.id
return agent
IGNORED_FIELDS = ('model', 'logger')
class BaseAgent:
class BaseAgent(Agent):
"""
A special simpy BaseAgent that keeps track of its state history.
A special Agent that keeps track of its state history.
"""
defaults = {}
def __init__(self, environment, agent_id, state=None,
name=None, interval=None):
def __init__(self,
unique_id,
model,
name=None,
interval=None):
# Check for REQUIRED arguments
assert environment is not None, TypeError('__init__ missing 1 required keyword argument: \'environment\'. '
'Cannot be NoneType.')
# Initialize agent parameters
self.id = agent_id
self.name = name or '{}[{}]'.format(type(self).__name__, self.id)
# Register agent to environment
self.env = environment
if isinstance(unique_id, Agent):
raise Exception()
self._saved = set()
super().__init__(unique_id=unique_id, model=model)
self.name = name or '{}[{}]'.format(type(self).__name__, self.unique_id)
self._neighbors = None
self.alive = True
real_state = deepcopy(self.defaults)
real_state.update(state or {})
self.state = real_state
self.interval = interval
self.logger = logging.getLogger(self.env.name).getChild(self.name)
self.interval = interval or self.get('interval', 1)
self.logger = logging.getLogger(self.model.name).getChild(self.name)
if hasattr(self, 'level'):
self.logger.setLevel(self.level)
# initialize every time an instance of the agent is created
self.action = self.env.process(self.run())
# TODO: refactor to clean up mesa compatibility
@property
def id(self):
return self.unique_id
@property
def env(self):
return self.model
@env.setter
def env(self, model):
self.model = model
@property
def state(self):
@ -70,40 +75,47 @@ class BaseAgent:
@state.setter
def state(self, value):
self._state = {}
for k, v in value.items():
self[k] = v
@property
def environment_params(self):
return self.env.environment_params
return self.model.environment_params
@environment_params.setter
def environment_params(self, value):
self.env.environment_params = value
self.model.environment_params = value
def __setattr__(self, key, value):
if not key.startswith('_') and key not in IGNORED_FIELDS:
try:
k = Key(t_step=self.now,
dict_id=self.unique_id,
key=key)
self._saved.add(key)
self.model[k] = value
except AttributeError:
pass
super().__setattr__(key, value)
def __getitem__(self, key):
if isinstance(key, tuple):
key, t_step = key
k = history.Key(key=key, t_step=t_step, agent_id=self.id)
return self.env[k]
return self._state.get(key, None)
k = Key(key=key, t_step=t_step, dict_id=self.unique_id)
return self.model[k]
return getattr(self, key)
def __delitem__(self, key):
self._state[key] = None
return delattr(self, key)
def __contains__(self, key):
return key in self._state
return hasattr(self, key)
def __setitem__(self, key, value):
self._state[key] = value
k = history.Key(t_step=self.now,
agent_id=self.id,
key=key)
self.env[k] = value
setattr(self, key, value)
def items(self):
return self._state.items()
return ((k, getattr(self, k)) for k in self._saved)
def get(self, key, default=None):
return self[key] if key in self else default
@ -111,29 +123,33 @@ class BaseAgent:
@property
def now(self):
try:
return self.env.now
return self.model.now
except AttributeError:
# No environment
return None
def run(self):
if self.interval is not None:
interval = self.interval
elif 'interval' in self:
interval = self['interval']
else:
interval = self.env.interval
while self.alive:
res = self.step()
yield res or self.env.timeout(interval)
def die(self, remove=False):
self.alive = False
if remove:
self.remove_node(self.id)
def step(self):
return
if not self.alive:
return time.When('inf')
return super().step() or time.Delta(self.interval)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = message + " ".join(str(i) for i in args)
message = " @{:>3}: {}".format(self.now, message)
for k, v in kwargs:
message += " {k}={v} ".format(k, v)
extra = {}
extra['now'] = self.now
extra['unique_id'] = self.unique_id
extra['agent_name'] = self.name
return self.logger.log(level, message, extra=extra)
def debug(self, *args, **kwargs):
return self.log(*args, level=logging.DEBUG, **kwargs)
@ -141,35 +157,16 @@ class BaseAgent:
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
def __getstate__(self):
'''
Serializing an agent will lose all its running information (you cannot
serialize an iterator), but it keeps the state and link to the environment,
so it can be used for inspection and dumping to a file
'''
state = {}
state['id'] = self.id
state['environment'] = self.env
state['_state'] = self._state
return state
def __setstate__(self, state):
'''
Get back a serialized agent and try to re-compose it
'''
self.id = state['id']
self._state = state['_state']
self.env = state['environment']
class NetworkAgent(BaseAgent):
@property
def topology(self):
return self.env.G
return self.model.G
@property
def G(self):
return self.env.G
return self.model.G
def count_agents(self, **kwargs):
return len(list(self.get_agents(**kwargs)))
@ -180,39 +177,34 @@ class NetworkAgent(BaseAgent):
def get_neighboring_agents(self, state_id=None, **kwargs):
return self.get_agents(limit_neighbors=True, state_id=state_id, **kwargs)
def get_agents(self, agents=None, limit_neighbors=False, **kwargs):
def get_agents(self, *args, limit=None, **kwargs):
it = self.iter_agents(*args, **kwargs)
if limit is not None:
it = islice(it, limit)
return list(it)
def iter_agents(self, agents=None, limit_neighbors=False, **kwargs):
if limit_neighbors:
agents = self.topology.neighbors(self.id)
agents = self.topology.neighbors(self.unique_id)
agents = self.env.get_agents(agents)
agents = self.model.get_agents(agents)
return select(agents, **kwargs)
def log(self, message, *args, level=logging.INFO, **kwargs):
message = message + " ".join(str(i) for i in args)
message = " @{:>3}: {}".format(self.now, message)
for k, v in kwargs:
message += " {k}={v} ".format(k, v)
extra = {}
extra['now'] = self.now
extra['agent_id'] = self.id
extra['agent_name'] = self.name
return self.logger.log(level, message, extra=extra)
def subgraph(self, center=True, **kwargs):
include = [self] if center else []
return self.topology.subgraph(n.id for n in self.get_agents(**kwargs)+include)
return self.topology.subgraph(n.unique_id for n in list(self.get_agents(**kwargs))+include)
def remove_node(self, agent_id):
self.topology.remove_node(agent_id)
def remove_node(self, unique_id):
self.topology.remove_node(unique_id)
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
# return super(NetworkAgent, self).add_edge(node1=self.id, node2=other, **kwargs)
if self.id not in self.topology.nodes(data=False):
raise ValueError('{} not in list of existing agents in the network'.format(self.id))
if other not in self.topology.nodes(data=False):
if self.unique_id not in self.topology.nodes(data=False):
raise ValueError('{} not in list of existing agents in the network'.format(self.unique_id))
if other.unique_id not in self.topology.nodes(data=False):
raise ValueError('{} not in list of existing agents in the network'.format(other))
self.topology.add_edge(self.id, other, edge_attr_dict=edge_attr_dict, *edge_attrs)
self.topology.add_edge(self.unique_id, other.unique_id, edge_attr_dict=edge_attr_dict, *edge_attrs)
def ego_search(self, steps=1, center=False, node=None, **kwargs):
@ -223,17 +215,17 @@ class NetworkAgent(BaseAgent):
def degree(self, node, force=False):
node = as_node(node)
if force or (not hasattr(self.env, '_degree')) or getattr(self.env, '_last_step', 0) < self.now:
self.env._degree = nx.degree_centrality(self.topology)
self.env._last_step = self.now
return self.env._degree[node]
if force or (not hasattr(self.model, '_degree')) or getattr(self.model, '_last_step', 0) < self.now:
self.model._degree = nx.degree_centrality(self.topology)
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
if force or (not hasattr(self.env, '_betweenness')) or getattr(self.env, '_last_step', 0) < self.now:
self.env._betweenness = nx.betweenness_centrality(self.topology)
self.env._last_step = self.now
return self.env._betweenness[node]
if force or (not hasattr(self.model, '_betweenness')) or getattr(self.model, '_last_step', 0) < self.now:
self.model._betweenness = nx.betweenness_centrality(self.topology)
self.model._last_step = self.now
return self.model._betweenness[node]
def state(name=None):
@ -299,38 +291,31 @@ class MetaFSM(type):
class FSM(NetworkAgent, metaclass=MetaFSM):
def __init__(self, *args, **kwargs):
super(FSM, self).__init__(*args, **kwargs)
if 'id' not in self.state:
if not hasattr(self, 'state_id'):
if not self.default_state:
raise ValueError('No default state specified for {}'.format(self.id))
self['id'] = self.default_state.id
self._next_change = simpy.core.Infinity
self._next_state = self.state
raise ValueError('No default state specified for {}'.format(self.unique_id))
self.state_id = self.default_state.id
self.set_state(self.state_id)
def step(self):
if self._next_change < self.now:
next_state = self._next_state
self._next_change = simpy.core.Infinity
self['id'] = next_state
elif 'id' in self.state:
next_state = self['id']
elif self.default_state:
next_state = self.default_state.id
else:
raise Exception('{} has no valid state id or default state'.format(self))
if next_state not in self.states:
raise Exception('{} is not a valid id for {}'.format(next_state, self))
return self.states[next_state](self)
def next_state(self, state):
self._next_change = self.now
self._next_state = state
self.debug(f'Agent {self.unique_id} @ state {self.state_id}')
interval = super().step()
if 'id' not in self.state:
# if 'id' in self.state:
# self.set_state(self.state['id'])
if self.default_state:
self.set_state(self.default_state.id)
else:
raise Exception('{} has no valid state id or default state'.format(self))
return self.states[self.state_id](self) or interval
def set_state(self, state):
if hasattr(state, 'id'):
state = state.id
if state not in self.states:
raise ValueError('{} is not a valid state'.format(state))
self['id'] = state
self.state_id = state
return state
@ -349,9 +334,6 @@ def prob(prob=1):
return r < prob
STATIC_THRESHOLD = (-1, -1)
def calculate_distribution(network_agents=None,
agent_type=None):
'''
@ -379,7 +361,7 @@ def calculate_distribution(network_agents=None,
'agent_type_1'.
'''
if network_agents:
network_agents = deepcopy(network_agents)
network_agents = [deepcopy(agent) for agent in network_agents if not hasattr(agent, 'id')]
elif agent_type:
network_agents = [{'agent_type': agent_type}]
else:
@ -394,7 +376,6 @@ def calculate_distribution(network_agents=None,
acc = 0
for v in network_agents:
if 'ids' in v:
v['threshold'] = STATIC_THRESHOLD
continue
upper = acc + (v['weight']/total)
v['threshold'] = [acc, upper]
@ -409,7 +390,7 @@ def serialize_type(agent_type, known_modules=[], **kwargs):
return serialization.serialize(agent_type, known_modules=known_modules, **kwargs)[1] # Get the name of the class
def serialize_distribution(network_agents, known_modules=[]):
def serialize_definition(network_agents, known_modules=[]):
'''
When serializing an agent distribution, remove the thresholds, in order
to avoid cluttering the YAML definition file.
@ -431,7 +412,7 @@ def deserialize_type(agent_type, known_modules=[]):
return agent_type
def deserialize_distribution(ind, **kwargs):
def deserialize_definition(ind, **kwargs):
d = deepcopy(ind)
for v in d:
v['agent_type'] = deserialize_type(v['agent_type'], **kwargs)
@ -452,44 +433,84 @@ def _validate_states(states, topology):
def _convert_agent_types(ind, to_string=False, **kwargs):
'''Convenience method to allow specifying agents by class or class name.'''
if to_string:
return serialize_distribution(ind, **kwargs)
return deserialize_distribution(ind, **kwargs)
return serialize_definition(ind, **kwargs)
return deserialize_definition(ind, **kwargs)
def _agent_from_distribution(distribution, value=-1, agent_id=None):
def _agent_from_definition(definition, value=-1, unique_id=None):
"""Used in the initialization of agents given an agent distribution."""
if value < 0:
value = random.random()
for d in sorted(distribution, key=lambda x: x['threshold']):
threshold = d['threshold']
for d in sorted(definition, key=lambda x: x.get('threshold')):
threshold = d.get('threshold', (-1, -1))
# Check if the definition matches by id (first) or by threshold
if not ((agent_id is not None and threshold == STATIC_THRESHOLD and agent_id in d['ids']) or \
(value >= threshold[0] and value < threshold[1])):
continue
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
if (unique_id is not None and unique_id in d.get('ids', [])) or \
(value >= threshold[0] and value < threshold[1]):
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
raise Exception('Distribution for value {} not found in: {}'.format(value, distribution))
raise Exception('Definition for value {} not found in: {}'.format(value, definition))
class Geo(NetworkAgent):
'''In this type of network, nodes have a "pos" attribute.'''
def _definition_to_dict(definition, size=None, default_state=None):
state = default_state or {}
agents = {}
remaining = {}
if size:
for ix in range(size):
remaining[ix] = copy(state)
else:
remaining = defaultdict(lambda x: copy(state))
def geo_search(self, radius, node=None, center=False, **kwargs):
'''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
node = as_node(node if node is not None else self)
distro = sorted([item for item in definition if 'weight' in item])
G = self.subgraph(**kwargs)
ix = 0
def init_agent(item, id=ix):
while id in agents:
id += 1
pos = nx.get_node_attributes(G, 'pos')
if not pos:
return []
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
agent = remaining[id]
agent['state'].update(copy(item.get('state', {})))
agents[id] = agent
del remaining[id]
return agent
for item in definition:
if 'ids' in item:
ids = item['ids']
del item['ids']
for id in ids:
agent = init_agent(item, id)
for item in definition:
if 'number' in item:
times = item['number']
del item['number']
for times in range(times):
if size:
ix = random.choice(remaining.keys())
agent = init_agent(item, id)
else:
agent = init_agent(item)
if not size:
return agents
if len(remaining) < 0:
raise Exception('Invalid definition. Too many agents to add')
total_weight = float(sum(s['weight'] for s in distro))
unit = size / total_weight
for item in distro:
times = unit * item['weight']
del item['weight']
for times in range(times):
ix = random.choice(remaining.keys())
agent = init_agent(item, id)
return agents
def select(agents, state_id=None, agent_type=None, ignore=None, iterator=False, **kwargs):
@ -502,25 +523,22 @@ def select(agents, state_id=None, agent_type=None, ignore=None, iterator=False,
except TypeError:
agent_type = tuple([agent_type])
def matches_all(agent):
if state_id is not None:
if agent.state.get('id', None) not in state_id:
return False
if agent_type is not None:
if not isinstance(agent, agent_type):
return False
state = agent.state
for k, v in kwargs.items():
if state.get(k, None) != v:
return False
return True
f = agents
f = filter(matches_all, agents)
if ignore:
f = filter(lambda x: x not in ignore, f)
if state_id is not None:
f = filter(lambda agent: agent.get('state_id', None) in state_id, f)
if agent_type is not None:
f = filter(lambda agent: isinstance(agent, agent_type), f)
for k, v in kwargs.items():
f = filter(lambda agent: agent.state.get(k, None) == v, f)
if iterator:
return f
return list(f)
return f
from .BassModel import *
@ -530,3 +548,10 @@ from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
try:
import scipy
from .Geo import Geo
except ImportError:
import sys
print('Could not load the Geo Agent, scipy is not installed', file=sys.stderr)

soil/analysis.py

@ -4,7 +4,8 @@ import glob
import yaml
from os.path import join
from . import serialization, history
from . import serialization
from tsih import History
def read_data(*args, group=False, **kwargs):
@ -34,7 +35,7 @@ def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
def read_sql(db, *args, **kwargs):
h = history.History(db_path=db, backup=False, readonly=True)
h = History(db_path=db, backup=False, readonly=True)
df = h.read_sql(*args, **kwargs)
return df
@ -61,7 +62,12 @@ def convert_row(row):
def convert_types_slow(df):
'''This is a slow operation.'''
'''
Go over every column in a dataframe and convert it to the type determined by the `get_types`
function.
This is a slow operation.
'''
dtypes = get_types(df)
for k, v in dtypes.items():
t = df[df['key']==k]
@ -102,6 +108,9 @@ def process(df, **kwargs):
def get_types(df):
'''
Get the value type for every key stored in a raw history dataframe.
'''
dtypes = df.groupby(by=['key'])['value_type'].unique()
return {k:v[0] for k,v in dtypes.iteritems()}
@ -126,8 +135,14 @@ def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
def get_count(df, *keys):
'''
For every t_step and key, get the value count.
The result is a dataframe with `t_step` as index and a multi-index column based on `key` and the values found for each `key`.
'''
if keys:
df = df[list(keys)]
df.columns = df.columns.remove_unused_levels()
counts = pd.DataFrame()
for key in df.columns.levels[0]:
g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
@ -137,10 +152,25 @@ def get_count(df, *keys):
return counts
def get_majority(df, *keys):
'''
For every t_step and key, get the value of the majority of agents
The result is a dataframe with `t_step` as index, and columns based on `key`.
'''
df = get_count(df, *keys)
return df.stack(level=0).idxmax(axis=1).unstack()
def get_value(df, *keys, aggfunc='sum'):
'''
For every t_step and key, get the value of *numeric columns*, aggregated using a specific function.
'''
if keys:
df = df[list(keys)]
return df.groupby(axis=1, level=0).agg(aggfunc)
df.columns = df.columns.remove_unused_levels()
df = df.select_dtypes('number')
return df.groupby(level='key', axis=1).agg(aggfunc)
def plot_all(*args, plot_args={}, **kwargs):

soil/datacollection.py

@ -0,0 +1,26 @@
from mesa import DataCollector as MDC
class SoilDataCollector(MDC):
def __init__(self, environment, *args, **kwargs):
super().__init__(*args, **kwargs)
# Populate model and env reporters so they have a key per
# So they can be shown in the web interface
self.environment = environment
@property
def model_vars(self):
pass
@model_vars.setter
def model_vars(self, value):
pass
@property
def agent_reporters(self):
self.model._history._
pass

soil/environment.py

@ -1,28 +1,31 @@
import os
import sqlite3
import time
import csv
import math
import random
import simpy
import yaml
import tempfile
import pandas as pd
from time import time as current_time
from copy import deepcopy
from networkx.readwrite import json_graph
import networkx as nx
import simpy
from . import serialization, agents, analysis, history, utils
from tsih import History, Record, Key, NoHistory
from mesa import Model
from . import serialization, agents, analysis, utils, time
# These properties will be copied when pickling/unpickling the environment
_CONFIG_PROPS = [ 'name',
'states',
'default_state',
'interval',
'states',
'default_state',
'interval',
]
class Environment(simpy.Environment):
class Environment(Model):
"""
The environment is key in a simulation. It contains the network topology,
a reference to network and environment agents, as well as the environment
@ -39,35 +42,70 @@ class Environment(simpy.Environment):
states=None,
default_state=None,
interval=1,
network_params=None,
seed=None,
topology=None,
schedule=None,
initial_time=0,
**environment_params):
environment_params=None,
history=True,
dir_path=None,
**kwargs):
super().__init__()
self.schedule = schedule
if schedule is None:
self.schedule = time.TimedActivation()
self.name = name or 'UnnamedEnvironment'
seed = seed or time.time()
seed = seed or current_time()
random.seed(seed)
if isinstance(states, list):
states = dict(enumerate(states))
self.states = deepcopy(states) if states else {}
self.default_state = deepcopy(default_state) or {}
if topology is None:
network_params = network_params or {}
topology = serialization.load_network(network_params,
dir_path=dir_path)
if not topology:
topology = nx.Graph()
self.G = nx.Graph(topology)
super().__init__(initial_time=initial_time)
self.environment_params = environment_params
self.environment_params = environment_params or {}
self.environment_params.update(kwargs)
self._env_agents = {}
self.interval = interval
self._history = history.History(name=self.name,
backup=True)
if history:
history = History
else:
history = NoHistory
self._history = history(name=self.name,
backup=True)
self['SEED'] = seed
# Add environment agents first, so their events get
# executed before network agents
self.environment_agents = environment_agents or []
self.network_agents = network_agents or []
if network_agents:
distro = agents.calculate_distribution(network_agents)
self.network_agents = agents._convert_agent_types(distro)
else:
self.network_agents = []
environment_agents = environment_agents or []
if environment_agents:
distro = agents.calculate_distribution(environment_agents)
environment_agents = agents._convert_agent_types(distro)
self.environment_agents = environment_agents
@property
def now(self):
if self.schedule:
return self.schedule.time
raise Exception('The environment has not been scheduled, so it has no sense of time')
@property
def agents(self):
@ -81,15 +119,9 @@ class Environment(simpy.Environment):
@environment_agents.setter
def environment_agents(self, environment_agents):
# Set up environmental agent
self._env_agents = {}
for item in environment_agents:
kwargs = deepcopy(item)
atype = kwargs.pop('agent_type')
kwargs['agent_id'] = kwargs.get('agent_id', atype.__name__)
kwargs['state'] = kwargs.get('state', {})
a = atype(environment=self, **kwargs)
self._env_agents[a.id] = a
self._environment_agents = environment_agents
self._env_agents = agents._definition_to_dict(definition=environment_agents)
@property
def network_agents(self):
@ -102,9 +134,9 @@ class Environment(simpy.Environment):
def network_agents(self, network_agents):
self._network_agents = network_agents
for ix in self.G.nodes():
self.init_agent(ix, agent_distribution=network_agents)
self.init_agent(ix, agent_definitions=network_agents)
def init_agent(self, agent_id, agent_distribution):
def init_agent(self, agent_id, agent_definitions):
node = self.G.nodes[agent_id]
init = False
state = dict(node)
@ -119,8 +151,8 @@ class Environment(simpy.Environment):
if agent_type:
agent_type = agents.deserialize_type(agent_type)
elif agent_distribution:
agent_type, state = agents._agent_from_distribution(agent_distribution, agent_id=agent_id)
elif agent_definitions:
agent_type, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
else:
serialization.logger.debug('Skipping node {}'.format(agent_id))
return
@ -136,10 +168,18 @@ class Environment(simpy.Environment):
a = None
if agent_type:
state = defstate
a = agent_type(environment=self,
agent_id=agent_id,
state=state)
a = agent_type(model=self,
unique_id=agent_id)
for (k, v) in getattr(a, 'defaults', {}).items():
if not hasattr(a, k) or getattr(a, k) is None:
setattr(a, k, v)
for (k, v) in state.items():
setattr(a, k, v)
node['agent'] = a
self.schedule.add(a)
return a
def add_node(self, agent_type, state=None):
@ -157,32 +197,23 @@ class Environment(simpy.Environment):
start = start or self.now
return self.G.add_edge(agent1, agent2, **attrs)
def step(self):
super().step()
self.datacollector.collect(self)
self.schedule.step()
def run(self, until, *args, **kwargs):
self._save_state()
super().run(until, *args, **kwargs)
while self.schedule.next_time <= until and not math.isinf(self.schedule.next_time):
self.schedule.step(until=until)
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
self._history.flush_cache()
def _save_state(self, now=None):
serialization.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def save_state(self):
'''
:DEPRECATED:
Periodically save the state of the environment and the agents.
'''
self._save_state()
while self.peek() != simpy.core.Infinity:
delay = max(self.peek() - self.now, self.interval)
serialization.logger.debug('Step: {}'.format(self.now))
ev = self.event()
ev._ok = True
# Schedule the event with minimum priority so
# that it executes before all agents
self.schedule(ev, -999, delay)
yield ev
self._save_state()
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
@ -192,12 +223,12 @@ class Environment(simpy.Environment):
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = history.Key(*key)
k = Key(*key)
self._history.save_record(*k,
value=value)
return
self.environment_params[key] = value
self._history.save_record(agent_id='env',
self._history.save_record(dict_id='env',
t_step=self.now,
key=key,
value=value)
@ -221,8 +252,8 @@ class Environment(simpy.Environment):
def get_agents(self, nodes=None):
if nodes is None:
return list(self.agents)
return [self.G.nodes[i]['agent'] for i in nodes]
return self.agents
return (self.G.nodes[i]['agent'] for i in nodes)
def dump_csv(self, f):
with utils.open_or_reuse(f, 'w') as f:
@ -262,16 +293,16 @@ class Environment(simpy.Environment):
if now is None:
now = self.now
for k, v in self.environment_params.items():
yield history.Record(agent_id='env',
t_step=now,
key=k,
value=v)
yield Record(dict_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
for k, v in agent.state.items():
yield history.Record(agent_id=agent.id,
t_step=now,
key=k,
value=v)
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
def history_to_tuples(self):
return self._history.to_tuples()
@ -329,7 +360,7 @@ class Environment(simpy.Environment):
state['G'] = json_graph.node_link_data(self.G)
state['environment_agents'] = self._env_agents
state['history'] = self._history
state['_now'] = self._now
state['schedule'] = self.schedule
return state
def __setstate__(self, state):
@ -338,7 +369,8 @@ class Environment(simpy.Environment):
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
self._history = state['history']
self._now = state['_now']
# self._env = None
self.schedule = state['schedule']
self._queue = []

soil/history.py

@ -1,385 +0,0 @@
import time
import os
import pandas as pd
import sqlite3
import copy
import logging
import tempfile
logger = logging.getLogger(__name__)
from collections import UserDict, namedtuple
from . import serialization
from .utils import open_or_reuse, unflatten_dict
class History:
"""
Store and retrieve values from a sqlite database.
"""
def __init__(self, name=None, db_path=None, backup=False, readonly=False):
if readonly and (not os.path.exists(db_path)):
raise Exception('The DB file does not exist. Cannot open in read-only mode')
self._db = None
self._temp = db_path is None
self._stats_columns = None
self.readonly = readonly
if self._temp:
if not name:
name = time.time()
# The file will be deleted as soon as it's closed
# Normally, that will be on destruction
db_path = tempfile.NamedTemporaryFile(suffix='{}.sqlite'.format(name)).name
if backup and os.path.exists(db_path):
newname = db_path + '.backup{}.sqlite'.format(time.time())
os.rename(db_path, newname)
self.db_path = db_path
self.db = db_path
self._dtypes = {}
self._tups = []
if self.readonly:
return
with self.db:
logger.debug('Creating database {}'.format(self.db_path))
self.db.execute('''CREATE TABLE IF NOT EXISTS history (agent_id text, t_step int, key text, value text)''')
self.db.execute('''CREATE TABLE IF NOT EXISTS value_types (key text, value_type text)''')
self.db.execute('''CREATE TABLE IF NOT EXISTS stats (trial_id text)''')
self.db.execute('''CREATE UNIQUE INDEX IF NOT EXISTS idx_history ON history (agent_id, t_step, key);''')
@property
def db(self):
try:
self._db.cursor()
except (sqlite3.ProgrammingError, AttributeError):
self.db = None # Reset the database
return self._db
@db.setter
def db(self, db_path=None):
self._close()
db_path = db_path or self.db_path
if isinstance(db_path, str):
logger.debug('Connecting to database {}'.format(db_path))
self._db = sqlite3.connect(db_path)
self._db.row_factory = sqlite3.Row
else:
self._db = db_path
def _close(self):
if self._db is None:
return
self.flush_cache()
self._db.close()
self._db = None
def save_stats(self, stat):
if self.readonly:
print('DB in readonly mode')
return
if not stat:
return
with self.db:
if not self._stats_columns:
self._stats_columns = list(c['name'] for c in self.db.execute('PRAGMA table_info(stats)'))
for column, value in stat.items():
if column in self._stats_columns:
continue
dtype = 'text'
if not isinstance(value, str):
try:
float(value)
dtype = 'real'
int(value)
dtype = 'int'
except ValueError:
pass
self.db.execute('ALTER TABLE stats ADD "{}" "{}"'.format(column, dtype))
self._stats_columns.append(column)
columns = ", ".join(map(lambda x: '"{}"'.format(x), stat.keys()))
values = ", ".join(['"{0}"'.format(col) for col in stat.values()])
query = "INSERT INTO stats ({columns}) VALUES ({values})".format(
columns=columns,
values=values
)
self.db.execute(query)
def get_stats(self, unflatten=True):
rows = self.db.execute("select * from stats").fetchall()
res = []
for row in rows:
d = {}
for k in row.keys():
if row[k] is None:
continue
d[k] = row[k]
if unflatten:
d = unflatten_dict(d)
res.append(d)
return res
@property
def dtypes(self):
self._read_types()
return {k:v[0] for k, v in self._dtypes.items()}
def save_tuples(self, tuples):
'''
Save a series of tuples, converting them to records if necessary
'''
self.save_records(Record(*tup) for tup in tuples)
def save_records(self, records):
'''
Save a collection of records
'''
for record in records:
if not isinstance(record, Record):
record = Record(*record)
self.save_record(*record)
def save_record(self, agent_id, t_step, key, value):
'''
Save a single record to the database.
Database writes are cached.
'''
if self.readonly:
raise Exception('DB in readonly mode')
if key not in self._dtypes:
self._read_types()
if key not in self._dtypes:
name = serialization.name(value)
serializer = serialization.serializer(name)
deserializer = serialization.deserializer(name)
self._dtypes[key] = (name, serializer, deserializer)
with self.db:
self.db.execute("replace into value_types (key, value_type) values (?, ?)", (key, name))
value = self._dtypes[key][1](value)
self._tups.append(Record(agent_id=agent_id,
t_step=t_step,
key=key,
value=value))
if len(self._tups) > 100:
self.flush_cache()
def flush_cache(self):
'''
Use a cache to save state changes to avoid opening a session for every change.
The cache will be flushed at the end of the simulation, and when history is accessed.
'''
if self.readonly:
raise Exception('DB in readonly mode')
logger.debug('Flushing cache {}'.format(self.db_path))
with self.db:
for rec in self._tups:
self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
self._tups = list()
def to_tuples(self):
self.flush_cache()
with self.db:
res = self.db.execute("select agent_id, t_step, key, value from history ").fetchall()
for r in res:
agent_id, t_step, key, value = r
if key not in self._dtypes:
self._read_types()
if key not in self._dtypes:
raise ValueError("Unknown datatype for {} and {}".format(key, value))
value = self._dtypes[key][2](value)
yield agent_id, t_step, key, value
def _read_types(self):
with self.db:
res = self.db.execute("select key, value_type from value_types ").fetchall()
for k, v in res:
serializer = serialization.serializer(v)
deserializer = serialization.deserializer(v)
self._dtypes[k] = (v, serializer, deserializer)
def __getitem__(self, key):
self.flush_cache()
key = Key(*key)
agent_ids = [key.agent_id] if key.agent_id is not None else []
t_steps = [key.t_step] if key.t_step is not None else []
keys = [key.key] if key.key is not None else []
df = self.read_sql(agent_ids=agent_ids,
t_steps=t_steps,
keys=keys)
r = Records(df, filter=key, dtypes=self._dtypes)
if r.resolved:
return r.value()
return r
def read_sql(self, keys=None, agent_ids=None, t_steps=None, convert_types=False, limit=-1):
self._read_types()
def escape_and_join(v):
if v is None:
return
return ",".join(map(lambda x: "\'{}\'".format(x), v))
filters = [("key in ({})".format(escape_and_join(keys)), keys),
("agent_id in ({})".format(escape_and_join(agent_ids)), agent_ids)
]
filters = list(k[0] for k in filters if k[1])
last_df = None
if t_steps:
# Convert negative indices into positive
if any(x<0 for x in t_steps):
max_t = int(self.db.execute("select max(t_step) from history").fetchone()[0])
t_steps = [t if t>0 else max_t+1+t for t in t_steps]
# We will be doing ffill interpolation, so we need to look for
# the last value before the minimum step in the query
min_step = min(t_steps)
last_filters = ['t_step < {}'.format(min_step),]
last_filters = last_filters + filters
condition = ' and '.join(last_filters)
last_query = '''
select h1.*
from history h1
inner join (
select agent_id, key, max(t_step) as t_step
from history
where {condition}
group by agent_id, key
) h2
on h1.agent_id = h2.agent_id and
h1.key = h2.key and
h1.t_step = h2.t_step
'''.format(condition=condition)
last_df = pd.read_sql_query(last_query, self.db)
filters.append("t_step >= '{}' and t_step <= '{}'".format(min_step, max(t_steps)))
condition = ''
if filters:
condition = 'where {} '.format(' and '.join(filters))
query = 'select * from history {} limit {}'.format(condition, limit)
df = pd.read_sql_query(query, self.db)
if last_df is not None:
df = pd.concat([df, last_df])
df_p = df.pivot_table(values='value', index=['t_step'],
columns=['key', 'agent_id'],
aggfunc='first')
for k, v in self._dtypes.items():
if k in df_p:
dtype, _, deserial = v
try:
df_p[k] = df_p[k].fillna(method='ffill').astype(dtype)
except (TypeError, ValueError):
# Avoid forward-filling unknown/incompatible types
continue
if t_steps:
df_p = df_p.reindex(t_steps, method='ffill')
return df_p.ffill()
def __getstate__(self):
state = dict(**self.__dict__)
del state['_db']
del state['_dtypes']
return state
def __setstate__(self, state):
self.__dict__ = state
self._dtypes = {}
self._db = None
def dump(self, f):
self._close()
for line in open_or_reuse(self.db_path, 'rb'):
f.write(line)
class Records():
def __init__(self, df, filter=None, dtypes=None):
if not filter:
filter = Key(agent_id=None,
t_step=None,
key=None)
self._df = df
self._filter = filter
self.dtypes = dtypes or {}
super().__init__()
def mask(self, tup):
res = ()
for i, k in zip(tup[:-1], self._filter):
if k is None:
res = res + (i,)
res = res + (tup[-1],)
return res
def filter(self, newKey):
f = list(self._filter)
for ix, i in enumerate(f):
if i is None:
f[ix] = newKey
self._filter = Key(*f)
@property
def resolved(self):
return sum(1 for i in self._filter if i is not None) == 3
def __iter__(self):
for column, series in self._df.iteritems():
key, agent_id = column
for t_step, value in series.iteritems():
r = Record(t_step=t_step,
agent_id=agent_id,
key=key,
value=value)
yield self.mask(r)
def value(self):
if self.resolved:
f = self._filter
try:
i = self._df[f.key][str(f.agent_id)]
ix = i.index.get_loc(f.t_step, method='ffill')
return i.iloc[ix]
except KeyError as ex:
return self.dtypes[f.key][2]()
return list(self)
def df(self):
return self._df
def __getitem__(self, k):
n = copy.copy(self)
n.filter(k)
if n.resolved:
return n.value()
return n
def __len__(self):
return len(self._df)
def __str__(self):
if self.resolved:
return str(self.value())
return '<Records for [{}]>'.format(self._filter)
Key = namedtuple('Key', ['agent_id', 't_step', 'key'])
Record = namedtuple('Record', 'agent_id t_step key value')
Stat = namedtuple('Stat', 'trial_id')
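
For reference, a minimal round trip through the `History` API removed here (it now lives in the external `tsih` package, where the `agent_id` field becomes `dict_id`); a sketch based on the methods above and on the old tests:

```python
from soil import history   # pre-0.20 layout; replaced by `tsih` in this commit

h = history.History()      # backed by a temporary sqlite file
h.save_record(agent_id='a_0', t_step=0, key='id', value='hello')
h.save_tuples([('env', 1, 'prob', 1)])
h.flush_cache()            # writes are buffered until flushed (or until read)

assert h['a_0', 0, 'id'] == 'hello'   # Key(agent_id, t_step, key)
assert h['env', 1, 'prob'] == 1

# read_sql returns a t_step-indexed, forward-filled DataFrame
# with one (key, agent_id) column per recorded series.
df = h.read_sql(agent_ids=['a_0', 'env'])
```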


@ -13,7 +13,6 @@ from jinja2 import Template
logger = logging.getLogger('soil')
logger.setLevel(logging.INFO)
def load_network(network_params, dir_path=None):
@ -51,6 +50,9 @@ def load_network(network_params, dir_path=None):
def load_file(infile):
folder = os.path.dirname(infile)
if folder not in sys.path:
sys.path.append(folder)
with open(infile, 'r') as f:
return list(chain.from_iterable(map(expand_template, load_string(f))))


@ -9,6 +9,7 @@ import networkx as nx
from networkx.readwrite import json_graph
from multiprocessing import Pool
from functools import partial
from tsih import History
import pickle
@ -17,7 +18,6 @@ from .environment import Environment
from .utils import logger
from .exporters import default
from .stats import defaultStats
from .history import History
#TODO: change documentation for simulation
@ -143,7 +143,7 @@ class Simulation:
return list(self.run_gen(*args, **kwargs))
def _run_sync_or_async(self, parallel=False, *args, **kwargs):
if parallel:
if parallel and not os.environ.get('SENPY_DEBUG', None):
p = Pool()
func = partial(self.run_trial_exceptions,
*args,
@ -159,7 +159,7 @@ class Simulation:
**kwargs)
def run_gen(self, *args, parallel=False, dry_run=False,
exporters=[default, ], stats=[defaultStats], outdir=None, exporter_params={},
exporters=[default, ], stats=[], outdir=None, exporter_params={},
stats_params={}, log_level=None,
**kwargs):
'''Run the simulation and yield the resulting environments.'''
@ -226,12 +226,14 @@ class Simulation:
opts.update({
'name': trial_id,
'topology': self.topology.copy(),
'network_params': self.network_params,
'seed': '{}_trial_{}'.format(self.seed, trial_id),
'initial_time': 0,
'interval': self.interval,
'network_agents': self.network_agents,
'initial_time': 0,
'states': self.states,
'dir_path': self.dir_path,
'default_state': self.default_state,
'environment_agents': self.environment_agents,
})
@ -304,10 +306,10 @@ class Simulation:
if k[0] != '_':
state[k] = v
state['topology'] = json_graph.node_link_data(self.topology)
state['network_agents'] = agents.serialize_distribution(self.network_agents,
known_modules = [])
state['environment_agents'] = agents.serialize_distribution(self.environment_agents,
known_modules = [])
state['network_agents'] = agents.serialize_definition(self.network_agents,
known_modules = [])
state['environment_agents'] = agents.serialize_definition(self.environment_agents,
known_modules = [])
state['environment_class'] = serialization.serialize(self.environment_class,
known_modules=['soil.environment'])[1] # func, name
if state['load_module'] is None:
@ -325,7 +327,6 @@ class Simulation:
known_modules=[self.load_module])
self.environment_class = serialization.deserialize(self.environment_class,
known_modules=[self.load_module, 'soil.environment', ]) # func, name
return state
def all_from_config(config):

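
Since `run_gen` yields one environment per trial and `run_simulation` collects them, the entry point used throughout the tests still looks like this; a minimal sketch with a config shaped like the ones in `tests/test_main.py` (`dry_run=True` keeps the exporters from writing files):

```python
from soil import simulation

config = {
    'name': 'counter_sim',
    'network_params': {},
    'agent_type': 'CounterModel',
    'max_time': 10,
    'num_trials': 2,
    'environment_params': {},
}

s = simulation.from_config(config)
envs = s.run_simulation(dry_run=True)   # one Environment per trial
assert len(envs) == config['num_trials']
assert all(env.now <= config['max_time'] for env in envs)
```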

@ -97,7 +97,7 @@ class defaultStats(Stats):
return {
'network ': {
'n_nodes': env.G.number_of_nodes(),
'n_edges': env.G.number_of_nodes(),
'n_edges': env.G.number_of_edges(),
},
'agents': {
'model_count': dict(c),

soil/time.py (new file, 87 lines)

@ -0,0 +1,87 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop
import math
from .utils import logger
from mesa import Agent
class When:
def __init__(self, time):
self._time = float(time)
def abs(self, time):
return self._time
class Delta:
def __init__(self, delta):
self._delta = delta
def __eq__(self, other):
return self._delta == other._delta
def abs(self, time):
return time + self._delta
class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
In each activation, each agent will update its 'next_time'.
"""
def __init__(self, *args, **kwargs):
super().__init__(self)
self._queue = []
self.next_time = 0
def add(self, agent: Agent):
if agent.unique_id not in self._agents:
heappush(self._queue, (self.time, agent.unique_id))
super().add(agent)
def step(self, until: float =float('inf')) -> None:
"""
Executes agents in order, one at a time. After each step,
an agent will signal when it wants to be scheduled next.
"""
when = None
agent_id = None
unsched = []
until = until or float('inf')
if not self._queue:
self.time = until
self.next_time = float('inf')
return
(when, agent_id) = self._queue[0]
if until and when > until:
self.time = until
self.next_time = when
return
self.time = when
next_time = float("inf")
while when == self.time:
heappop(self._queue)
logger.debug(f'Stepping agent {agent_id}')
when = (self._agents[agent_id].step() or Delta(1)).abs(self.time)
heappush(self._queue, (when, agent_id))
if when < next_time:
next_time = when
if not self._queue or self._queue[0][0] > self.time:
agent_id = None
break
else:
(when, agent_id) = self._queue[0]
if when and when < self.time:
raise Exception("Invalid scheduling time")
self.next_time = next_time
self.steps += 1
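
A small sketch of how this scheduler is meant to be driven: `add` queues each agent at the current time, every `step()` jumps the clock to the earliest pending wake-up, and whatever the agent returns (`When`, `Delta`, or nothing, which defaults to `Delta(1)`) decides when it runs next. The toy agent here is illustrative, not part of this commit:

```python
from mesa import Agent, Model
from soil.time import TimedActivation, Delta

class EveryTwo(Agent):
    """Toy agent that asks to be woken up again two time units later."""
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.ticks = 0

    def step(self):
        self.ticks += 1
        return Delta(2)          # When(t) would request an absolute time instead

model = Model()
schedule = TimedActivation(model)
schedule.add(EveryTwo(1, model))

schedule.step()                  # activates the agent at t=0, reschedules it for t=2
schedule.step()                  # jumps the clock to t=2 and activates it again
assert schedule.time == 2 and schedule.next_time == 4
```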


@ -7,8 +7,8 @@ from shutil import copyfile
from contextlib import contextmanager
logger = logging.getLogger('soil')
logging.basicConfig()
logger.setLevel(logging.INFO)
# logging.basicConfig()
# logger.setLevel(logging.INFO)
@contextmanager
@ -26,6 +26,8 @@ def timer(name='task', pre="", function=logger.info, to_object=None):
to_object.end = end
def safe_open(path, mode='r', backup=True, **kwargs):
outdir = os.path.dirname(path)
if outdir and not os.path.exists(outdir):

soil/visualization.py (new file, 5 lines)

@ -0,0 +1,5 @@
from mesa.visualization.UserParam import UserSettableParameter
class UserSettableParameter(UserSettableParameter):
def __str__(self):
return self.value
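
Overriding `__str__` this way presumably lets templates interpolate the parameter's current value directly. A hedged sketch of declaring one for a Mesa-style visualization server, using a choice parameter so the value is already a string (the names and choices are illustrative):

```python
from soil.visualization import UserSettableParameter

model_choice = UserSettableParameter('choice', 'Model', value='SIR',
                                     choices=['SIR', 'SEIR'])

# str() of the parameter is its current value, per the override above.
assert str(model_choice) == 'SIR'
```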


@ -1 +1,4 @@
pytest
pytest
mesa>=0.8.9
scipy>=1.3
tornado


@ -21,11 +21,13 @@ class Ping(agents.FSM):
@agents.default_state
@agents.state
def even(self):
self.debug(f'Even {self["count"]}')
self['count'] += 1
return self.odd
@agents.state
def odd(self):
self.debug(f'Odd {self["count"]}')
self['count'] += 1
return self.even
@ -65,13 +67,13 @@ class TestAnalysis(TestCase):
def test_count(self):
env = self.env
df = analysis.read_sql(env._history.db_path)
res = analysis.get_count(df, 'SEED', 'id')
res = analysis.get_count(df, 'SEED', 'state_id')
assert res['SEED'][self.env['SEED']].iloc[0] == 1
assert res['SEED'][self.env['SEED']].iloc[-1] == 1
assert res['id']['odd'].iloc[0] == 2
assert res['id']['even'].iloc[0] == 0
assert res['id']['odd'].iloc[-1] == 1
assert res['id']['even'].iloc[-1] == 1
assert res['state_id']['odd'].iloc[0] == 2
assert res['state_id']['even'].iloc[0] == 0
assert res['state_id']['odd'].iloc[-1] == 1
assert res['state_id']['even'].iloc[-1] == 1
def test_value(self):
env = self.env
@ -82,8 +84,7 @@ class TestAnalysis(TestCase):
import numpy as np
res_mean = analysis.get_value(df, 'count', aggfunc=np.mean)
assert res_mean['count'].iloc[0] == 1
res_total = analysis.get_value(df)
assert res_mean['count'].iloc[15] == (16+8)/2
res_total = analysis.get_majority(df)
res_total['SEED'].iloc[0] == self.env['SEED']
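
Taken together, the updated assertions sketch the post-rename analysis workflow (`id` → `state_id`): load the trial history into a DataFrame, then aggregate per key. Roughly, with the import path assumed from the calls above:

```python
from soil import analysis
import numpy as np

df = analysis.read_sql(env._history.db_path)            # env: a finished trial

counts = analysis.get_count(df, 'SEED', 'state_id')     # agents per state over time
odd_at_start = counts['state_id']['odd'].iloc[0]

means = analysis.get_value(df, 'count', aggfunc=np.mean)   # aggregate a numeric key
majority = analysis.get_majority(df)                       # most common value per key
```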


@ -1,203 +0,0 @@
from unittest import TestCase
import os
import shutil
from glob import glob
from soil import history
from soil import utils
ROOT = os.path.abspath(os.path.dirname(__file__))
DBROOT = os.path.join(ROOT, 'testdb')
class TestHistory(TestCase):
def setUp(self):
if not os.path.exists(DBROOT):
os.makedirs(DBROOT)
def tearDown(self):
if os.path.exists(DBROOT):
shutil.rmtree(DBROOT)
def test_history(self):
"""
"""
tuples = (
('a_0', 0, 'id', 'h'),
('a_0', 1, 'id', 'e'),
('a_0', 2, 'id', 'l'),
('a_0', 3, 'id', 'l'),
('a_0', 4, 'id', 'o'),
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 3, 'prob', 2),
('env', 5, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
# assert h['env', 0, 'prob'] == 0
for i in range(1, 7):
assert h['env', i, 'prob'] == ((i-1)//2)+1
for i, k in zip(range(5), 'hello'):
assert h['a_0', i, 'id'] == k
for record, value in zip(h['a_0', None, 'id'], 'hello'):
t_step, val = record
assert val == value
for i, k in zip(range(5), 'value'):
assert h['a_1', i, 'id'] == k
for i in range(5, 8):
assert h['a_1', i, 'id'] == 'e'
for i in range(7):
assert h['a_2', i, 'finished'] == False
assert h['a_2', 7, 'finished']
def test_history_gen(self):
"""
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
for t_step, key, value in h['env', None, None]:
assert t_step == value
assert key == 'prob'
records = list(h[None, 7, None])
assert len(records) == 3
for i in records:
agent_id, key, value = i
if agent_id == 'a_1':
assert key == 'id'
assert value == 'e'
elif agent_id == 'a_2':
assert key == 'finished'
assert value
else:
assert key == 'prob'
assert value == 3
records = h['a_1', 7, None]
assert records['id'] == 'e'
def test_history_file(self):
"""
History should be saved to a file
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
db_path = os.path.join(DBROOT, 'test')
h = history.History(db_path=db_path)
h.save_tuples(tuples)
h.flush_cache()
assert os.path.exists(db_path)
# Recover the data
recovered = history.History(db_path=db_path)
assert recovered['a_1', 0, 'id'] == 'v'
assert recovered['a_1', 4, 'id'] == 'e'
# Using backup=True should create a backup copy, and initialize an empty history
newhistory = history.History(db_path=db_path, backup=True)
backuppaths = glob(db_path + '.backup*.sqlite')
assert len(backuppaths) == 1
backuppath = backuppaths[0]
assert newhistory.db_path == h.db_path
assert os.path.exists(backuppath)
assert len(newhistory[None, None, None]) == 0
def test_history_tuples(self):
"""
The data recovered should be equal to the one recorded.
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
recovered = list(h.to_tuples())
assert recovered
for i in recovered:
assert i in tuples
def test_stats(self):
"""
Saved stats should be recovered with their original values.
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
stat_tuples = [
{'num_infected': 5, 'runtime': 0.2},
{'num_infected': 5, 'runtime': 0.2},
{'new': '40'},
]
h = history.History()
h.save_tuples(tuples)
for stat in stat_tuples:
h.save_stats(stat)
recovered = h.get_stats()
assert recovered
assert recovered[0]['num_infected'] == 5
assert recovered[1]['runtime'] == 0.2
assert recovered[2]['new'] == '40'
def test_unflatten(self):
ex = {'count.neighbors.3': 4,
'count.times.2': 4,
'count.total.4': 4,
'mean.neighbors': 3,
'mean.times': 2,
'mean.total': 4,
't_step': 2,
'trial_id': 'exporter_sim_trial_1605817956-4475424'}
res = utils.unflatten_dict(ex)
assert 'count' in res
assert 'mean' in res
assert 't_step' in res
assert 'trial_id' in res


@ -9,7 +9,8 @@ from functools import partial
from os.path import join
from soil import (simulation, Environment, agents, serialization,
history, utils)
utils)
from soil.time import Delta
ROOT = os.path.abspath(os.path.dirname(__file__))
@ -20,8 +21,8 @@ class CustomAgent(agents.FSM):
@agents.default_state
@agents.state
def normal(self):
self.state['neighbors'] = self.count_agents(state_id='normal',
limit_neighbors=True)
self.neighbors = self.count_agents(state_id='normal',
limit_neighbors=True)
@agents.state
def unreachable(self):
return
@ -115,7 +116,7 @@ class TestMain(TestCase):
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'id': 0}
'state': {'state_id': 0}
}],
'max_time': 10,
@ -126,7 +127,7 @@ class TestMain(TestCase):
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 10
assert len(agent[None, None]) == 11
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
@ -148,10 +149,9 @@ class TestMain(TestCase):
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(0).state['neighbors'] == 1
assert env.get_agent(0).state['neighbors'] == 1
assert env.get_agent(1).count_agents(state_id='normal') == 2
assert env.get_agent(1).count_agents(state_id='normal', limit_neighbors=True) == 1
assert env.get_agent(0).neighbors == 1
def test_torvalds_example(self):
"""A complete example from a documentation should work."""
@ -198,11 +198,11 @@ class TestMain(TestCase):
"""
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
for i in range(5):
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
del nconfig['topology']
assert config == nconfig
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
del nconfig['topology']
assert config == nconfig
def test_row_conversion(self):
env = Environment()
@ -211,7 +211,7 @@ class TestMain(TestCase):
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env._now = 1
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
@ -281,7 +281,7 @@ class TestMain(TestCase):
'weight': 2
},
]
converted = agents.deserialize_distribution(agent_distro)
converted = agents.deserialize_definition(agent_distro)
assert converted[0]['agent_type'] == agents.CounterModel
assert converted[1]['agent_type'] == CustomAgent
pickle.dumps(converted)
@ -297,14 +297,14 @@ class TestMain(TestCase):
'weight': 2
},
]
converted = agents.serialize_distribution(agent_distro)
converted = agents.serialize_definition(agent_distro)
assert converted[0]['agent_type'] == 'CounterModel'
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_pickle_agent_environment(self):
env = Environment(name='Test')
a = agents.BaseAgent(environment=env, agent_id=25)
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
@ -316,12 +316,6 @@ class TestMain(TestCase):
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'
def test_history(self):
'''Test storing in and retrieving from history (sqlite)'''
h = history.History()
h.save_record(agent_id=0, t_step=0, key="test", value="hello")
assert h[0, 0, "test"] == "hello"
def test_subgraph(self):
'''An agent should be able to subgraph the global topology'''
G = nx.Graph()
@ -345,14 +339,53 @@ class TestMain(TestCase):
def test_until(self):
config = {
'name': 'exporter_sim',
'name': 'until_sim',
'network_params': {},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': 100,
'num_trials': 50,
'environment_params': {}
}
s = simulation.from_config(config)
runs = list(s.run_simulation(dry_run=True))
over = list(x.now for x in runs if x.now>2)
assert len(runs) == config['num_trials']
assert len(over) == 0
def test_fsm(self):
'''Basic state change'''
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state
def ping(self):
return self.pong
@agents.state
def pong(self):
return self.ping
a = ToggleAgent(unique_id=1, model=Environment())
assert a.state_id == a.ping.id
a.step()
assert a.state_id == a.pong.id
a.step()
assert a.state_id == a.ping.id
def test_fsm_when(self):
'''Basic state change'''
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state
def ping(self):
return self.pong, 2
@agents.state
def pong(self):
return self.ping
a = ToggleAgent(unique_id=1, model=Environment())
when = a.step()
assert when == 2
when = a.step()
assert when == Delta(a.interval)

tests/test_mesa.py (new file, 69 lines)

@ -0,0 +1,69 @@
'''
Mesa-SOIL integration tests
We have to test that:
- Mesa agents can be used in SOIL
- Simplified soil agents can be used in mesa simulations
- Mesa and soil agents can interact in a simulation
- Mesa visualizations work with SOIL simulations
'''
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
def step(self):
'''Advance the model by one step.'''
self.schedule.step()
# model = MoneyModel(10, 10, 10)
# for i in range(10):
#     model.step()
# agent_wealth = [a.wealth for a in model.schedule.agents]
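
Uncommented and with a simple conservation check, the driver above doubles as a smoke test that the plain-Mesa model behaves before mixing it with soil agents (the grid size is chosen arbitrarily):

```python
model = MoneyModel(10, width=5, height=5)
for _ in range(10):
    model.step()

agent_wealth = [a.wealth for a in model.schedule.agents]
assert sum(agent_wealth) == 10   # wealth is only transferred, never created
```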