mirror of https://github.com/gsi-upm/soil
synced 2025-10-27 13:48:17 +00:00

Compare commits: cd62c23cb9 ... 0.30.0rc2 (11 commits)

Commits (SHA1):
- a2fb25c160
- 5fcf610108
- 159c9a9077
- 3776c4e5c5
- 880a9f2a1c
- 227fdf050e
- 5d759d0072
- 77d08fc592
- 0efcd24d90
- 78833a9e08
- d9947c2c52
CHANGELOG.md (12 changes)
@@ -3,16 +3,22 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.3 UNRELEASED]
## [0.30 UNRELEASED]

### Added

* Simple debugging capabilities, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents)
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run
* Ability to
* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
* A modular set of classes for environments/models. The ability to configure the agents through an agent definition and a topology through a network configuration is now split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similarly to normal states, with one caveat: only `time` values can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.
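As an illustration of the new generator states (a minimal sketch; `Waiter` and its state are made-up names, not part of the library):

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Waiter(FSM):
    @default_state
    @state
    def waiting(self):
        # Only time values may be yielded; the same state resumes after the delay.
        yield Delta(2)
        self.info("Resumed two time units later")
        # The return value can still be a state or a (state, time) tuple.
        return self.waiting, Delta(10)
```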
### Changed

* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to which they are assigned.
* Ability

### Removed

* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
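For example, recording state with a plain Mesa datacollector, as the examples in this changeset do (a sketch; the environment subclass and reporter names are illustrative):

```python
from mesa.datacollection import DataCollector
from soil import Environment


class RecordedEnv(Environment):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # One model-level series and one agent-level series, collected each step.
        self.datacollector = DataCollector(
            model_reporters={"n_agents": lambda m: len(list(m.agents))},
            agent_reporters={"state_id": "state_id"},
        )
```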

## [0.20.7]

### Changed

* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
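In other words (a small sketch, assuming `When` is imported from `soil.time` as in the bundled examples):

```python
from soil.time import When

inner = When(3)
# Wrapping an existing When no longer nests it; the same instance comes back.
assert When(inner) is inner
```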
@@ -10,19 +10,14 @@ seed: "CompleteSeed!"
model_class: Environment
model_params:
am_i_complete: true
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
topology:
params:
generator: complete_graph
n: 12
environment:
agents:
agent_class: CounterModel
topology: default
topology: true
state:
times: 1
# In this group we are not specifying any topology
@@ -30,25 +25,23 @@ model_params:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: null
topology: false
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0
group: other_counters
topology: another_graph
group: fixed_counters
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
group: other_counters
group: fixed_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: general_counters
group: distro_counters
state:
times: 3
- agent_class: AggregatedCounter
@@ -1,63 +0,0 @@
---
version: '2'
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: "soil.Environment"
model_params:
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
agents:
# The values here will be used as default values for any agent
agent_class: CounterModel
topology: default
state:
times: 1
# This specifies a distribution of agents, each with a `weight` or an explicit number of agents
distribution:
- agent_class: CounterModel
weight: 1
# This is inherited from the default settings
#topology: default
state:
times: 3
- agent_class: AggregatedCounter
topology: default
weight: 0.2
fixed:
- name: 'Environment Agent 1'
# All the other agents will be assigned to the 'default' group
group: environment
# Do not count this agent towards total limits
hidden: true
agent_class: soil.BaseAgent
topology: null
state:
times: 10
- agent_class: CounterModel
topology: another_graph
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
id: 1
override:
# 2 agents that match this filter will be updated to match the state {times: 5}
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
@@ -2,11 +2,12 @@ from networkx import Graph
import random
import networkx as nx


def mygenerator(n=5, n_edges=5):
'''
"""
Just a simple generator that creates a network with n nodes and
n_edges edges. Edges are assigned randomly, only avoiding self loops.
'''
"""
G = nx.Graph()

for i in range(n):
@@ -19,9 +20,3 @@ def mygenerator(n=5, n_edges=5):
n_out = random.choice(nodes)
G.add_edge(n_in, n_out)
return G
@@ -2,34 +2,37 @@ from soil.agents import FSM, state, default_state


class Fibonacci(FSM):
'''Agent that only executes in t_steps that are Fibonacci numbers'''
"""Agent that only executes in t_steps that are Fibonacci numbers"""

defaults = {
'prev': 1
}
defaults = {"prev": 1}

@default_state
@state
def counting(self):
self.log('Stopping at {}'.format(self.now))
prev, self['prev'] = self['prev'], max([self.now, self['prev']])
self.log("Stopping at {}".format(self.now))
prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
return None, self.env.timeout(prev)


class Odds(FSM):
'''Agent that only executes in odd t_steps'''
"""Agent that only executes in odd t_steps"""

@default_state
@state
def odds(self):
self.log('Stopping at {}'.format(self.now))
return None, self.env.timeout(1+self.now%2)
self.log("Stopping at {}".format(self.now))
return None, self.env.timeout(1 + self.now % 2)

if __name__ == '__main__':
import logging
logging.basicConfig(level=logging.INFO)

if __name__ == "__main__":
from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci},
{'ids': [1], 'agent_class': Odds}],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)

s = Simulation(
network_agents=[
{"ids": [0], "agent_class": Fibonacci},
{"ids": [1], "agent_class": Odds},
],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)
examples/events_and_messages/README.md (new file, 7 lines)
@@ -0,0 +1,7 @@
This example can be run with command-line options, like this:

```bash
python cars.py --level DEBUG -e summary --csv
```

This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.
examples/events_and_messages/cars.py (new file, 205 lines)
@@ -0,0 +1,205 @@
|
||||
"""
|
||||
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
|
||||
from their location to their desired location.
|
||||
|
||||
An example scenario could play like the following:
|
||||
|
||||
- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
|
||||
- Passenger(1) tells every driver that it wants to request a Journey.
|
||||
- Each driver receives the request.
|
||||
If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
|
||||
- When Passenger(1) accepts the request, two things happen:
|
||||
- Passenger(1) changes its state to `driving_home`
|
||||
- Driver(2) starts moving towards the origin of the Journey
|
||||
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
|
||||
- When Driver(2) reaches the destination (carrying Passenger(1) along):
|
||||
- Driver(2) starts wandering again
|
||||
- Passenger(1) dies, and is removed from the simulation
|
||||
- If there are no more passengers available in the simulation, Drivers die
|
||||
"""
|
||||
from __future__ import annotations
|
||||
from soil import *
|
||||
from soil import events
|
||||
from mesa.space import MultiGrid
|
||||
|
||||
|
||||
# More complex scenarios may use more than one type of message between objects.
|
||||
# A common pattern is to use `enum.Enum` to represent state changes in a request.
|
||||
@dataclass
|
||||
class Journey:
|
||||
"""
|
||||
This represents a request for a journey. Passengers and drivers exchange this object.
|
||||
|
||||
A journey may have a driver assigned or not. If the driver has not been assigned, this
|
||||
object is considered a "request for a journey".
|
||||
"""
|
||||
origin: (int, int)
|
||||
destination: (int, int)
|
||||
tip: float
|
||||
|
||||
passenger: Passenger
|
||||
driver: Driver = None
|
||||
|
||||
|
||||
class City(EventedEnvironment):
|
||||
"""
|
||||
An environment with a grid where drivers and passengers will be placed.
|
||||
|
||||
The number of drivers and riders is configurable through its parameters:
|
||||
|
||||
:param str n_cars: The total number of drivers to add
|
||||
:param str n_passengers: The number of passengers in the simulation
|
||||
:param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
|
||||
and `n_cars` params.
|
||||
:param int height: Height of the internal grid
|
||||
:param int width: Width of the internal grid
|
||||
"""
|
||||
def __init__(self, *args, n_cars=1, n_passengers=10,
|
||||
height=100, width=100, agents=None,
|
||||
model_reporters=None,
|
||||
**kwargs):
|
||||
self.grid = MultiGrid(width=width, height=height, torus=False)
|
||||
if agents is None:
|
||||
agents = []
|
||||
for i in range(n_cars):
|
||||
agents.append({'agent_class': Driver})
|
||||
for i in range(n_passengers):
|
||||
agents.append({'agent_class': Passenger})
|
||||
model_reporters = model_reporters or {'earnings': 'total_earnings', 'n_passengers': 'number_passengers'}
|
||||
print('REPORTERS', model_reporters)
|
||||
super().__init__(*args, agents=agents, model_reporters=model_reporters, **kwargs)
|
||||
for agent in self.agents:
|
||||
self.grid.place_agent(agent, (0, 0))
|
||||
self.grid.move_to_empty(agent)
|
||||
|
||||
@property
|
||||
def total_earnings(self):
|
||||
return sum(d.earnings for d in self.agents(agent_class=Driver))
|
||||
|
||||
@property
|
||||
def number_passengers(self):
|
||||
return self.count_agents(agent_class=Passenger)
|
||||
|
||||
|
||||
class Driver(Evented, FSM):
|
||||
pos = None
|
||||
journey = None
|
||||
earnings = 0
|
||||
|
||||
def on_receive(self, msg, sender):
|
||||
'''This is not a state. It will run (and block) every time check_messages is invoked'''
|
||||
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
|
||||
msg.driver = self
|
||||
self.journey = msg
|
||||
|
||||
def check_passengers(self):
|
||||
'''If there are no more passengers, stop forever'''
|
||||
c = self.count_agents(agent_class=Passenger)
|
||||
self.info(f"Passengers left {c}")
|
||||
if not c:
|
||||
self.die()
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def wandering(self):
|
||||
'''Move around the city until a journey is accepted'''
|
||||
target = None
|
||||
self.check_passengers()
|
||||
self.journey = None
|
||||
while self.journey is None: # No potential journeys detected (see on_receive)
|
||||
if target is None or not self.move_towards(target):
|
||||
target = self.random.choice(self.model.grid.get_neighborhood(self.pos, moore=False))
|
||||
|
||||
self.check_passengers()
|
||||
self.check_messages() # This will call on_receive behind the scenes, and the agent's status will be updated
|
||||
yield Delta(30) # Wait at least 30 seconds before checking again
|
||||
|
||||
try:
|
||||
# Re-send the journey to the passenger, to confirm that we have been selected
|
||||
self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
|
||||
except events.TimedOut:
|
||||
# No journey has been accepted. Try again
|
||||
self.journey = None
|
||||
return
|
||||
|
||||
return self.driving
|
||||
|
||||
@state
|
||||
def driving(self):
|
||||
'''The journey has been accepted. Pick them up and take them to their destination'''
|
||||
while self.move_towards(self.journey.origin):
|
||||
yield
|
||||
while self.move_towards(self.journey.destination, with_passenger=True):
|
||||
yield
|
||||
self.earnings += self.journey.tip
|
||||
self.check_passengers()
|
||||
return self.wandering
|
||||
|
||||
def move_towards(self, target, with_passenger=False):
|
||||
'''Move one cell at a time towards a target'''
|
||||
self.info(f"Moving { self.pos } -> { target }")
|
||||
if target[0] == self.pos[0] and target[1] == self.pos[1]:
|
||||
return False
|
||||
|
||||
next_pos = [self.pos[0], self.pos[1]]
|
||||
for idx in [0, 1]:
|
||||
if self.pos[idx] < target[idx]:
|
||||
next_pos[idx] += 1
|
||||
break
|
||||
if self.pos[idx] > target[idx]:
|
||||
next_pos[idx] -= 1
|
||||
break
|
||||
self.model.grid.move_agent(self, tuple(next_pos))
|
||||
if with_passenger:
|
||||
self.journey.passenger.pos = self.pos # This could be communicated through messages
|
||||
return True
|
||||
|
||||
|
||||
class Passenger(Evented, FSM):
|
||||
pos = None
|
||||
|
||||
def on_receive(self, msg, sender):
|
||||
'''This is not a state. It will be run synchronously every time `check_messages` is run'''
|
||||
|
||||
if isinstance(msg, Journey):
|
||||
self.journey = msg
|
||||
return msg
|
||||
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def asking(self):
|
||||
destination = (self.random.randint(0, self.model.grid.height), self.random.randint(0, self.model.grid.width))
|
||||
self.journey = None
|
||||
journey = Journey(origin=self.pos,
|
||||
destination=destination,
|
||||
tip=self.random.randint(10, 100),
|
||||
passenger=self)
|
||||
|
||||
timeout = 60
|
||||
expiration = self.now + timeout
|
||||
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
|
||||
while not self.journey:
|
||||
self.info(f"Passenger at: { self.pos }. Checking for responses.")
|
||||
try:
|
||||
yield self.received(expiration=expiration)
|
||||
except events.TimedOut:
|
||||
self.info(f"Passenger at: { self.pos }. Asking for journey.")
|
||||
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
|
||||
expiration = self.now + timeout
|
||||
self.check_messages()
|
||||
return self.driving_home
|
||||
|
||||
@state
|
||||
def driving_home(self):
|
||||
while self.pos[0] != self.journey.destination[0] or self.pos[1] != self.journey.destination[1]:
|
||||
yield self.received(timeout=60)
|
||||
self.info("Got home safe!")
|
||||
self.die()
|
||||
|
||||
|
||||
simulation = Simulation(name='RideHailing', model_class=City, model_params={'n_passengers': 2})
|
||||
|
||||
if __name__ == "__main__":
|
||||
with easy(simulation) as s:
|
||||
s.run()
|
||||
@@ -8,17 +8,12 @@ interval: 1
seed: '1'
model_class: social_wealth.MoneyEnv
model_params:
topologies:
default:
params:
generator: social_wealth.graph_generator
n: 5
generator: social_wealth.graph_generator
agents:
topology: true
distribution:
- agent_class: social_wealth.SocialMoneyAgent
topology: default
weight: 1
mesa_agent_class: social_wealth.MoneyAgent
N: 10
width: 50
height: 50
@@ -2,6 +2,7 @@ from mesa.visualization.ModularVisualization import ModularServer
|
||||
from soil.visualization import UserSettableParameter
|
||||
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
|
||||
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
|
||||
import networkx as nx
|
||||
|
||||
|
||||
class MyNetwork(NetworkModule):
|
||||
@@ -13,15 +14,18 @@ def network_portrayal(env):
|
||||
# The model ensures there is 0 or 1 agent per node
|
||||
|
||||
portrayal = dict()
|
||||
wealths = {
|
||||
node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
|
||||
}
|
||||
portrayal["nodes"] = [
|
||||
{
|
||||
"id": agent_id,
|
||||
"size": env.get_agent(agent_id).wealth,
|
||||
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
|
||||
"color": "#CC0000",
|
||||
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
|
||||
"id": node_id,
|
||||
"size": 2 * (wealth + 1),
|
||||
"color": "#CC0000" if wealth == 0 else "#007959",
|
||||
# "color": "#CC0000",
|
||||
"label": f"{node_id}: {wealth}",
|
||||
}
|
||||
for (agent_id) in env.G.nodes
|
||||
for (node_id, wealth) in wealths.items()
|
||||
]
|
||||
|
||||
portrayal["edges"] = [
|
||||
@@ -29,7 +33,6 @@ def network_portrayal(env):
|
||||
for edge_id, (source, target) in enumerate(env.G.edges)
|
||||
]
|
||||
|
||||
|
||||
return portrayal
|
||||
|
||||
|
||||
@@ -40,7 +43,7 @@ def gridPortrayal(agent):
|
||||
:param agent: the agent in the simulation
|
||||
:return: the portrayal dictionary
|
||||
"""
|
||||
color = max(10, min(agent.wealth*10, 100))
|
||||
color = max(10, min(agent.wealth * 10, 100))
|
||||
return {
|
||||
"Shape": "rect",
|
||||
"w": 1,
|
||||
@@ -51,11 +54,11 @@ def gridPortrayal(agent):
|
||||
"Text": agent.unique_id,
|
||||
"x": agent.pos[0],
|
||||
"y": agent.pos[1],
|
||||
"Color": f"rgba(31, 10, 255, 0.{color})"
|
||||
"Color": f"rgba(31, 10, 255, 0.{color})",
|
||||
}
|
||||
|
||||
|
||||
grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
|
||||
grid = MyNetwork(network_portrayal, 500, 500)
|
||||
chart = ChartModule(
|
||||
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
|
||||
)
|
||||
@@ -70,7 +73,6 @@ model_params = {
|
||||
1,
|
||||
description="Choose how many agents to include in the model",
|
||||
),
|
||||
"network_agents": [{"agent_class": SocialMoneyAgent}],
|
||||
"height": UserSettableParameter(
|
||||
"slider",
|
||||
"height",
|
||||
@@ -79,7 +81,7 @@ model_params = {
|
||||
10,
|
||||
1,
|
||||
description="Grid height",
|
||||
),
|
||||
),
|
||||
"width": UserSettableParameter(
|
||||
"slider",
|
||||
"width",
|
||||
@@ -88,13 +90,20 @@ model_params = {
|
||||
10,
|
||||
1,
|
||||
description="Grid width",
|
||||
),
|
||||
"network_params": {
|
||||
'generator': graph_generator
|
||||
},
|
||||
),
|
||||
"agent_class": UserSettableParameter(
|
||||
"choice",
|
||||
"Agent class",
|
||||
value="MoneyAgent",
|
||||
choices=["MoneyAgent", "SocialMoneyAgent"],
|
||||
),
|
||||
"generator": graph_generator,
|
||||
}
|
||||
|
||||
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
|
||||
|
||||
canvas_element = CanvasGrid(
|
||||
gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
|
||||
)
|
||||
|
||||
|
||||
server = ModularServer(
|
||||
|
||||
@@ -1,23 +1,26 @@
|
||||
'''
|
||||
"""
|
||||
This is an example that adds soil agents and environment in a normal
|
||||
mesa workflow.
|
||||
'''
|
||||
"""
|
||||
from mesa import Agent as MesaAgent
|
||||
from mesa.space import MultiGrid
|
||||
|
||||
# from mesa.time import RandomActivation
|
||||
from mesa.datacollection import DataCollector
|
||||
from mesa.batchrunner import BatchRunner
|
||||
|
||||
import networkx as nx
|
||||
|
||||
from soil import NetworkAgent, Environment
|
||||
from soil import NetworkAgent, Environment, serialization
|
||||
|
||||
|
||||
def compute_gini(model):
|
||||
agent_wealths = [agent.wealth for agent in model.agents]
|
||||
x = sorted(agent_wealths)
|
||||
N = len(list(model.agents))
|
||||
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
|
||||
return (1 + (1/N) - 2*B)
|
||||
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
|
||||
return 1 + (1 / N) - 2 * B
|
||||
|
||||
|
||||
class MoneyAgent(MesaAgent):
|
||||
"""
|
||||
@@ -25,15 +28,14 @@ class MoneyAgent(MesaAgent):
|
||||
It will only share wealth with neighbors based on grid proximity
|
||||
"""
|
||||
|
||||
def __init__(self, unique_id, model):
|
||||
def __init__(self, unique_id, model, wealth=1):
|
||||
super().__init__(unique_id=unique_id, model=model)
|
||||
self.wealth = 1
|
||||
self.wealth = wealth
|
||||
|
||||
def move(self):
|
||||
possible_steps = self.model.grid.get_neighborhood(
|
||||
self.pos,
|
||||
moore=True,
|
||||
include_center=False)
|
||||
self.pos, moore=True, include_center=False
|
||||
)
|
||||
new_position = self.random.choice(possible_steps)
|
||||
self.model.grid.move_agent(self, new_position)
|
||||
|
||||
@@ -45,7 +47,7 @@ class MoneyAgent(MesaAgent):
|
||||
self.wealth -= 1
|
||||
|
||||
def step(self):
|
||||
self.info("Crying wolf", self.pos)
|
||||
print("Crying wolf", self.pos)
|
||||
self.move()
|
||||
if self.wealth > 0:
|
||||
self.give_money()
|
||||
@@ -56,10 +58,10 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
|
||||
|
||||
def give_money(self):
|
||||
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
|
||||
friends = set(self.get_neighboring_agents())
|
||||
friends = set(self.get_neighbors())
|
||||
self.info("Trying to give money")
|
||||
self.debug("Cellmates: ", cellmates)
|
||||
self.debug("Friends: ", friends)
|
||||
self.info("Cellmates: ", cellmates)
|
||||
self.info("Friends: ", friends)
|
||||
|
||||
nearby_friends = list(cellmates & friends)
|
||||
|
||||
@@ -69,13 +71,35 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
|
||||
self.wealth -= 1
|
||||
|
||||
|
||||
def graph_generator(n=5):
|
||||
G = nx.Graph()
|
||||
for ix in range(n):
|
||||
G.add_edge(0, ix)
|
||||
return G
|
||||
|
||||
|
||||
class MoneyEnv(Environment):
|
||||
"""A model with some number of agents."""
|
||||
def __init__(self, width, height, *args, topologies, **kwargs):
|
||||
|
||||
super().__init__(*args, topologies=topologies, **kwargs)
|
||||
def __init__(
|
||||
self,
|
||||
width,
|
||||
height,
|
||||
N,
|
||||
generator=graph_generator,
|
||||
agent_class=SocialMoneyAgent,
|
||||
topology=None,
|
||||
**kwargs
|
||||
):
|
||||
|
||||
generator = serialization.deserialize(generator)
|
||||
agent_class = serialization.deserialize(agent_class, globs=globals())
|
||||
topology = generator(n=N)
|
||||
super().__init__(topology=topology, N=N, **kwargs)
|
||||
self.grid = MultiGrid(width, height, False)
|
||||
|
||||
self.populate_network(agent_class=agent_class)
|
||||
|
||||
# Create agents
|
||||
for agent in self.agents:
|
||||
x = self.random.randrange(self.grid.width)
|
||||
@@ -83,37 +107,31 @@ class MoneyEnv(Environment):
|
||||
self.grid.place_agent(agent, (x, y))
|
||||
|
||||
self.datacollector = DataCollector(
|
||||
model_reporters={"Gini": compute_gini},
|
||||
agent_reporters={"Wealth": "wealth"})
|
||||
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
|
||||
)
|
||||
|
||||
|
||||
def graph_generator(n=5):
|
||||
G = nx.Graph()
|
||||
for ix in range(n):
|
||||
G.add_edge(0, ix)
|
||||
return G
|
||||
if __name__ == "__main__":
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
|
||||
G = graph_generator()
|
||||
fixed_params = {"topology": G,
|
||||
"width": 10,
|
||||
"network_agents": [{"agent_class": SocialMoneyAgent,
|
||||
'weight': 1}],
|
||||
"height": 10}
|
||||
fixed_params = {
|
||||
"generator": nx.complete_graph,
|
||||
"width": 10,
|
||||
"network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
|
||||
"height": 10,
|
||||
}
|
||||
|
||||
variable_params = {"N": range(10, 100, 10)}
|
||||
|
||||
batch_run = BatchRunner(MoneyEnv,
|
||||
variable_parameters=variable_params,
|
||||
fixed_parameters=fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini})
|
||||
batch_run = BatchRunner(
|
||||
MoneyEnv,
|
||||
variable_parameters=variable_params,
|
||||
fixed_parameters=fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini},
|
||||
)
|
||||
batch_run.run_all()
|
||||
|
||||
run_data = batch_run.get_model_vars_dataframe()
|
||||
run_data.head()
|
||||
print(run_data.Gini)
|
||||
|
||||
|
||||
@@ -4,24 +4,26 @@ from mesa.time import RandomActivation
|
||||
from mesa.datacollection import DataCollector
|
||||
from mesa.batchrunner import BatchRunner
|
||||
|
||||
|
||||
def compute_gini(model):
|
||||
agent_wealths = [agent.wealth for agent in model.schedule.agents]
|
||||
x = sorted(agent_wealths)
|
||||
N = model.num_agents
|
||||
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
|
||||
return (1 + (1/N) - 2*B)
|
||||
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
|
||||
return 1 + (1 / N) - 2 * B
|
||||
|
||||
|
||||
class MoneyAgent(Agent):
|
||||
""" An agent with fixed initial wealth."""
|
||||
"""An agent with fixed initial wealth."""
|
||||
|
||||
def __init__(self, unique_id, model):
|
||||
super().__init__(unique_id, model)
|
||||
self.wealth = 1
|
||||
|
||||
def move(self):
|
||||
possible_steps = self.model.grid.get_neighborhood(
|
||||
self.pos,
|
||||
moore=True,
|
||||
include_center=False)
|
||||
self.pos, moore=True, include_center=False
|
||||
)
|
||||
new_position = self.random.choice(possible_steps)
|
||||
self.model.grid.move_agent(self, new_position)
|
||||
|
||||
@@ -37,8 +39,10 @@ class MoneyAgent(Agent):
|
||||
if self.wealth > 0:
|
||||
self.give_money()
|
||||
|
||||
|
||||
class MoneyModel(Model):
|
||||
"""A model with some number of agents."""
|
||||
|
||||
def __init__(self, N, width, height):
|
||||
self.num_agents = N
|
||||
self.grid = MultiGrid(width, height, True)
|
||||
@@ -55,29 +59,29 @@ class MoneyModel(Model):
|
||||
self.grid.place_agent(a, (x, y))
|
||||
|
||||
self.datacollector = DataCollector(
|
||||
model_reporters={"Gini": compute_gini},
|
||||
agent_reporters={"Wealth": "wealth"})
|
||||
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
|
||||
)
|
||||
|
||||
def step(self):
|
||||
self.datacollector.collect(self)
|
||||
self.schedule.step()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if __name__ == "__main__":
|
||||
|
||||
fixed_params = {"width": 10,
|
||||
"height": 10}
|
||||
fixed_params = {"width": 10, "height": 10}
|
||||
variable_params = {"N": range(10, 500, 10)}
|
||||
|
||||
batch_run = BatchRunner(MoneyModel,
|
||||
variable_params,
|
||||
fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini})
|
||||
batch_run = BatchRunner(
|
||||
MoneyModel,
|
||||
variable_params,
|
||||
fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini},
|
||||
)
|
||||
batch_run.run_all()
|
||||
|
||||
run_data = batch_run.get_model_vars_dataframe()
|
||||
run_data.head()
|
||||
print(run_data.Gini)
|
||||
|
||||
|
||||
@@ -3,84 +3,85 @@ import logging
|
||||
|
||||
|
||||
class DumbViewer(FSM, NetworkAgent):
|
||||
'''
|
||||
"""
|
||||
A viewer that gets infected via TV (if it has one) and tries to infect
|
||||
its neighbors once it's infected.
|
||||
'''
|
||||
defaults = {
|
||||
'prob_neighbor_spread': 0.5,
|
||||
'prob_tv_spread': 0.1,
|
||||
}
|
||||
"""
|
||||
|
||||
prob_neighbor_spread = 0.5
|
||||
prob_tv_spread = 0.1
|
||||
has_been_infected = False
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def neutral(self):
|
||||
if self['has_tv']:
|
||||
if self.prob(self.model['prob_tv_spread']):
|
||||
if self["has_tv"]:
|
||||
if self.prob(self.model["prob_tv_spread"]):
|
||||
return self.infected
|
||||
if self.has_been_infected:
|
||||
return self.infected
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
|
||||
if self.prob(self.model['prob_neighbor_spread']):
|
||||
for neighbor in self.get_neighbors(state_id=self.neutral.id):
|
||||
if self.prob(self.model["prob_neighbor_spread"]):
|
||||
neighbor.infect()
|
||||
|
||||
def infect(self):
|
||||
'''
|
||||
"""
|
||||
This is not a state. It is a function that other agents can use to try to
|
||||
infect this agent. DumbViewer always gets infected, but other agents like
|
||||
HerdViewer might not become infected right away
|
||||
'''
|
||||
"""
|
||||
|
||||
self.set_state(self.infected)
|
||||
self.has_been_infected = True
|
||||
|
||||
|
||||
class HerdViewer(DumbViewer):
|
||||
'''
|
||||
"""
|
||||
A viewer whose probability of infection depends on the state of its neighbors.
|
||||
'''
|
||||
"""
|
||||
|
||||
def infect(self):
|
||||
'''Notice again that this is NOT a state. See DumbViewer.infect for reference'''
|
||||
infected = self.count_neighboring_agents(state_id=self.infected.id)
|
||||
total = self.count_neighboring_agents()
|
||||
prob_infect = self.model['prob_neighbor_spread'] * infected/total
|
||||
self.debug('prob_infect', prob_infect)
|
||||
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
|
||||
infected = self.count_neighbors(state_id=self.infected.id)
|
||||
total = self.count_neighbors()
|
||||
prob_infect = self.model["prob_neighbor_spread"] * infected / total
|
||||
self.debug("prob_infect", prob_infect)
|
||||
if self.prob(prob_infect):
|
||||
self.set_state(self.infected)
|
||||
self.has_been_infected = True
|
||||
|
||||
|
||||
class WiseViewer(HerdViewer):
|
||||
'''
|
||||
"""
|
||||
A viewer that can change its mind.
|
||||
'''
|
||||
"""
|
||||
|
||||
defaults = {
|
||||
'prob_neighbor_spread': 0.5,
|
||||
'prob_neighbor_cure': 0.25,
|
||||
'prob_tv_spread': 0.1,
|
||||
"prob_neighbor_spread": 0.5,
|
||||
"prob_neighbor_cure": 0.25,
|
||||
"prob_tv_spread": 0.1,
|
||||
}
|
||||
|
||||
@state
|
||||
def cured(self):
|
||||
prob_cure = self.model['prob_neighbor_cure']
|
||||
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
|
||||
prob_cure = self.model["prob_neighbor_cure"]
|
||||
for neighbor in self.get_neighbors(state_id=self.infected.id):
|
||||
if self.prob(prob_cure):
|
||||
try:
|
||||
neighbor.cure()
|
||||
except AttributeError:
|
||||
self.debug('Viewer {} cannot be cured'.format(neighbor.id))
|
||||
self.debug("Viewer {} cannot be cured".format(neighbor.id))
|
||||
|
||||
def cure(self):
|
||||
self.set_state(self.cured.id)
|
||||
self.has_been_cured = True
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
cured = max(self.count_neighboring_agents(self.cured.id),
|
||||
1.0)
|
||||
infected = max(self.count_neighboring_agents(self.infected.id),
|
||||
1.0)
|
||||
prob_cure = self.model['prob_neighbor_cure'] * (cured/infected)
|
||||
if self.has_been_cured:
|
||||
return self.cured
|
||||
cured = max(self.count_neighbors(self.cured.id), 1.0)
|
||||
infected = max(self.count_neighbors(self.infected.id), 1.0)
|
||||
prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
|
||||
if self.prob(prob_cure):
|
||||
return self.cured
|
||||
return self.set_state(super().infected)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
'''
|
||||
"""
|
||||
Example of a fully programmatic simulation, without definition files.
|
||||
'''
|
||||
"""
|
||||
from soil import Simulation, agents
|
||||
from networkx import Graph
|
||||
import logging
|
||||
@@ -14,21 +14,22 @@ def mygenerator():
|
||||
|
||||
|
||||
class MyAgent(agents.FSM):
|
||||
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
def neutral(self):
|
||||
self.debug('I am running')
|
||||
self.debug("I am running")
|
||||
if agents.prob(0.2):
|
||||
self.info('This runs 2/10 times on average')
|
||||
self.info("This runs 2/10 times on average")
|
||||
|
||||
|
||||
s = Simulation(name='Programmatic',
|
||||
network_params={'generator': mygenerator},
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
agent_class=MyAgent,
|
||||
dry_run=True)
|
||||
s = Simulation(
|
||||
name="Programmatic",
|
||||
network_params={"generator": mygenerator},
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
agent_class=MyAgent,
|
||||
dry_run=True,
|
||||
)
|
||||
|
||||
|
||||
# By default, logging will only print WARNING logs (and above).
|
||||
|
||||
@@ -5,7 +5,8 @@ import logging
|
||||
|
||||
|
||||
class CityPubs(Environment):
|
||||
'''Environment with Pubs'''
|
||||
"""Environment with Pubs"""
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
|
||||
@@ -13,68 +14,70 @@ class CityPubs(Environment):
|
||||
pubs = {}
|
||||
for i in range(number_of_pubs):
|
||||
newpub = {
|
||||
'name': 'The awesome pub #{}'.format(i),
|
||||
'open': True,
|
||||
'capacity': pub_capacity,
|
||||
'occupancy': 0,
|
||||
"name": "The awesome pub #{}".format(i),
|
||||
"open": True,
|
||||
"capacity": pub_capacity,
|
||||
"occupancy": 0,
|
||||
}
|
||||
pubs[newpub['name']] = newpub
|
||||
self['pubs'] = pubs
|
||||
pubs[newpub["name"]] = newpub
|
||||
self["pubs"] = pubs
|
||||
|
||||
def enter(self, pub_id, *nodes):
|
||||
'''Agents will try to enter. The pub checks if it is possible'''
|
||||
"""Agents will try to enter. The pub checks if it is possible"""
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
pub = self["pubs"][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
|
||||
raise ValueError("Pub {} is not available".format(pub_id))
|
||||
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
|
||||
return False
|
||||
pub['occupancy'] += len(nodes)
|
||||
pub["occupancy"] += len(nodes)
|
||||
for node in nodes:
|
||||
node['pub'] = pub_id
|
||||
node["pub"] = pub_id
|
||||
return True
|
||||
|
||||
def available_pubs(self):
|
||||
for pub in self['pubs'].values():
|
||||
if pub['open'] and (pub['occupancy'] < pub['capacity']):
|
||||
yield pub['name']
|
||||
for pub in self["pubs"].values():
|
||||
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
|
||||
yield pub["name"]
|
||||
|
||||
def exit(self, pub_id, *node_ids):
|
||||
'''Agents will notify the pub they want to leave'''
|
||||
"""Agents will notify the pub they want to leave"""
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
pub = self["pubs"][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
raise ValueError("Pub {} is not available".format(pub_id))
|
||||
for node_id in node_ids:
|
||||
node = self.get_agent(node_id)
|
||||
if pub_id == node['pub']:
|
||||
del node['pub']
|
||||
pub['occupancy'] -= 1
|
||||
if pub_id == node["pub"]:
|
||||
del node["pub"]
|
||||
pub["occupancy"] -= 1
|
||||
|
||||
|
||||
class Patron(FSM, NetworkAgent):
|
||||
'''Agent that looks for friends to drink with. It will do three things:
|
||||
1) Look for other patrons to drink with
|
||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||
'''
|
||||
"""Agent that looks for friends to drink with. It will do three things:
|
||||
1) Look for other patrons to drink with
|
||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||
"""
|
||||
|
||||
level = logging.DEBUG
|
||||
|
||||
pub = None
|
||||
drunk = False
|
||||
pints = 0
|
||||
max_pints = 3
|
||||
kicked_out = False
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def looking_for_friends(self):
|
||||
'''Look for friends to drink with'''
|
||||
self.info('I am looking for friends')
|
||||
available_friends = list(self.get_agents(drunk=False,
|
||||
pub=None,
|
||||
state_id=self.looking_for_friends.id))
|
||||
"""Look for friends to drink with"""
|
||||
self.info("I am looking for friends")
|
||||
available_friends = list(
|
||||
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
|
||||
)
|
||||
if not available_friends:
|
||||
self.info('Life sucks and I\'m alone!')
|
||||
self.info("Life sucks and I'm alone!")
|
||||
return self.at_home
|
||||
befriended = self.try_friends(available_friends)
|
||||
if befriended:
|
||||
@@ -82,91 +85,91 @@ class Patron(FSM, NetworkAgent):
|
||||
|
||||
@state
|
||||
def looking_for_pub(self):
|
||||
'''Look for a pub that accepts me and my friends'''
|
||||
if self['pub'] != None:
|
||||
"""Look for a pub that accepts me and my friends"""
|
||||
if self["pub"] != None:
|
||||
return self.sober_in_pub
|
||||
self.debug('I am looking for a pub')
|
||||
group = list(self.get_neighboring_agents())
|
||||
self.debug("I am looking for a pub")
|
||||
group = list(self.get_neighbors())
|
||||
for pub in self.model.available_pubs():
|
||||
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
|
||||
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
|
||||
if self.model.enter(pub, self, *group):
|
||||
self.info('We\'re all {} getting in {}!'.format(len(group), pub))
|
||||
self.info("We're all {} getting in {}!".format(len(group), pub))
|
||||
return self.sober_in_pub
|
||||
|
||||
@state
|
||||
def sober_in_pub(self):
|
||||
'''Drink up.'''
|
||||
"""Drink up."""
|
||||
self.drink()
|
||||
if self['pints'] > self['max_pints']:
|
||||
if self["pints"] > self["max_pints"]:
|
||||
return self.drunk_in_pub
|
||||
|
||||
@state
|
||||
def drunk_in_pub(self):
|
||||
'''I'm out. Take me home!'''
|
||||
self.info('I\'m so drunk. Take me home!')
|
||||
self['drunk'] = True
|
||||
pass # out drunk
|
||||
"""I'm out. Take me home!"""
|
||||
self.info("I'm so drunk. Take me home!")
|
||||
self["drunk"] = True
|
||||
if self.kicked_out:
|
||||
return self.at_home
|
||||
pass # out drunk
|
||||
|
||||
@state
|
||||
def at_home(self):
|
||||
'''The end'''
|
||||
"""The end"""
|
||||
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
|
||||
self.debug('I\'m home. Just like {} of my friends'.format(len(others)))
|
||||
self.debug("I'm home. Just like {} of my friends".format(len(others)))
|
||||
|
||||
def drink(self):
|
||||
self['pints'] += 1
|
||||
self.debug('Cheers to that')
|
||||
self["pints"] += 1
|
||||
self.debug("Cheers to that")
|
||||
|
||||
def kick_out(self):
|
||||
self.set_state(self.at_home)
|
||||
self.kicked_out = True
|
||||
|
||||
def befriend(self, other_agent, force=False):
|
||||
'''
|
||||
"""
|
||||
Try to become friends with another agent. The chances of
|
||||
success depend on both agents' openness.
|
||||
'''
|
||||
if force or self['openness'] > self.random.random():
|
||||
self.model.add_edge(self, other_agent)
|
||||
self.info('Made some friend {}'.format(other_agent))
|
||||
"""
|
||||
if force or self["openness"] > self.random.random():
|
||||
self.add_edge(self, other_agent)
|
||||
self.info("Made some friend {}".format(other_agent))
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_friends(self, others):
|
||||
''' Look for random agents around me and try to befriend them'''
|
||||
"""Look for random agents around me and try to befriend them"""
|
||||
befriended = False
|
||||
k = int(10*self['openness'])
|
||||
k = int(10 * self["openness"])
|
||||
self.random.shuffle(others)
|
||||
for friend in islice(others, k): # random.choice >= 3.7
|
||||
if friend == self:
|
||||
continue
|
||||
if friend.befriend(self):
|
||||
self.befriend(friend, force=True)
|
||||
self.debug('Hooray! new friend: {}'.format(friend.id))
|
||||
self.debug("Hooray! new friend: {}".format(friend.id))
|
||||
befriended = True
|
||||
else:
|
||||
self.debug('{} does not want to be friends'.format(friend.id))
|
||||
self.debug("{} does not want to be friends".format(friend.id))
|
||||
return befriended
|
||||
|
||||
|
||||
class Police(FSM):
|
||||
'''Simple agent to take drunk people out of pubs.'''
|
||||
"""Simple agent to take drunk people out of pubs."""
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def patrol(self):
|
||||
drunksters = list(self.get_agents(drunk=True,
|
||||
state_id=Patron.drunk_in_pub.id))
|
||||
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
|
||||
for drunk in drunksters:
|
||||
self.info('Kicking out the trash: {}'.format(drunk.id))
|
||||
self.info("Kicking out the trash: {}".format(drunk.id))
|
||||
drunk.kick_out()
|
||||
else:
|
||||
self.info('No trash to take out. Too bad.')
|
||||
self.info("No trash to take out. Too bad.")
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if __name__ == "__main__":
|
||||
from soil import simulation
|
||||
simulation.run_from_config('pubcrawl.yml',
|
||||
dry_run=True,
|
||||
dump=None,
|
||||
parallel=False)
|
||||
|
||||
simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
|
||||
|
||||
@@ -2,3 +2,13 @@ There are two similar implementations of this simulation.

- `basic`. Using simple primitives
- `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.

The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:

```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```

To learn more about how this functionality works, check out the `soil.easy` function.
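As a minimal sketch (mirroring the `__main__` block of `rabbit_agents.py` in this changeset), the same simulation file can be launched from Python with `soil.easy`:

```python
from soil import easy

# Loads the YAML definition and, per this README, handles the
# command-line flags shown above (--csv, -e summary, --set ...).
with easy("rabbits.yml") as sim:
    sim.run()
```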
@@ -1,12 +1,24 @@
|
||||
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
|
||||
from soil.time import Delta
|
||||
from enum import Enum
|
||||
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
|
||||
from collections import Counter
|
||||
import logging
|
||||
import math
|
||||
|
||||
|
||||
class RabbitModel(FSM, NetworkAgent):
|
||||
class RabbitEnv(Environment):
|
||||
@property
|
||||
def num_rabbits(self):
|
||||
return self.count_agents(agent_class=Rabbit)
|
||||
|
||||
@property
|
||||
def num_males(self):
|
||||
return self.count_agents(agent_class=Male)
|
||||
|
||||
@property
|
||||
def num_females(self):
|
||||
return self.count_agents(agent_class=Female)
|
||||
|
||||
|
||||
class Rabbit(NetworkAgent, FSM):
|
||||
|
||||
sexual_maturity = 30
|
||||
life_expectancy = 300
|
||||
@@ -14,7 +26,7 @@ class RabbitModel(FSM, NetworkAgent):
|
||||
@default_state
|
||||
@state
|
||||
def newborn(self):
|
||||
self.info('I am a newborn.')
|
||||
self.info("I am a newborn.")
|
||||
self.age = 0
|
||||
self.offspring = 0
|
||||
return self.youngling
|
||||
@@ -23,7 +35,7 @@ class RabbitModel(FSM, NetworkAgent):
|
||||
def youngling(self):
|
||||
self.age += 1
|
||||
if self.age >= self.sexual_maturity:
|
||||
self.info(f'I am fertile! My age is {self.age}')
|
||||
self.info(f"I am fertile! My age is {self.age}")
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
@@ -35,7 +47,7 @@ class RabbitModel(FSM, NetworkAgent):
|
||||
self.die()
|
||||
|
||||
|
||||
class Male(RabbitModel):
|
||||
class Male(Rabbit):
|
||||
max_females = 5
|
||||
mating_prob = 0.001
|
||||
|
||||
@@ -47,17 +59,18 @@ class Male(RabbitModel):
|
||||
return self.dead
|
||||
|
||||
# Males try to mate
|
||||
for f in self.model.agents(agent_class=Female,
|
||||
state_id=Female.fertile.id,
|
||||
limit=self.max_females):
|
||||
self.debug('FOUND A FEMALE: ', repr(f), self.mating_prob)
|
||||
if self.prob(self['mating_prob']):
|
||||
for f in self.model.agents(
|
||||
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||
):
|
||||
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||
if self.prob(self["mating_prob"]):
|
||||
f.impregnate(self)
|
||||
break # Take a break
|
||||
|
||||
|
||||
class Female(RabbitModel):
|
||||
gestation = 100
|
||||
class Female(Rabbit):
|
||||
gestation = 10
|
||||
pregnancy = -1
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
@@ -65,66 +78,73 @@ class Female(RabbitModel):
|
||||
self.age += 1
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
if self.pregnancy >= 0:
|
||||
return self.pregnant
|
||||
|
||||
def impregnate(self, male):
|
||||
self.info(f'{repr(male)} impregnating female {repr(self)}')
|
||||
self.info(f"impregnated by {repr(male)}")
|
||||
self.mate = male
|
||||
self.pregnancy = -1
|
||||
self.set_state(self.pregnant, when=self.now)
|
||||
self.number_of_babies = int(8+4*self.random.random())
|
||||
self.debug('I am pregnant')
|
||||
self.pregnancy = 0
|
||||
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self.info("I am pregnant")
|
||||
self.age += 1
|
||||
self.pregnancy += 1
|
||||
|
||||
if self.prob(self.age / self.life_expectancy):
|
||||
if self.age >= self.life_expectancy:
|
||||
return self.die()
|
||||
|
||||
if self.pregnancy >= self.gestation:
|
||||
self.info('Having {} babies'.format(self.number_of_babies))
|
||||
for i in range(self.number_of_babies):
|
||||
state = {}
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class,
|
||||
topology=self.topology,
|
||||
**state)
|
||||
child.add_edge(self)
|
||||
try:
|
||||
child.add_edge(self.mate)
|
||||
self.model.agents[self.mate].offspring += 1
|
||||
except ValueError:
|
||||
self.debug('The father has passed away')
|
||||
if self.pregnancy < self.gestation:
|
||||
self.pregnancy += 1
|
||||
return
|
||||
|
||||
self.offspring += 1
|
||||
self.mate = None
|
||||
return self.fertile
|
||||
self.info("Having {} babies".format(self.number_of_babies))
|
||||
for i in range(self.number_of_babies):
|
||||
state = {}
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class, **state)
|
||||
child.add_edge(self)
|
||||
try:
|
||||
child.add_edge(self.mate)
|
||||
self.model.agents[self.mate].offspring += 1
|
||||
except ValueError:
|
||||
self.debug("The father has passed away")
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
super().dead()
|
||||
if 'pregnancy' in self and self['pregnancy'] > -1:
|
||||
self.info('A mother has died carrying a baby!!')
|
||||
self.offspring += 1
|
||||
self.mate = None
|
||||
self.pregnancy = -1
|
||||
return self.fertile
|
||||
|
||||
def die(self):
|
||||
if "pregnancy" in self and self["pregnancy"] > -1:
|
||||
self.info("A mother has died carrying a baby!!")
|
||||
return super().die()
|
||||
|
||||
|
||||
class RandomAccident(BaseAgent):
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
def step(self):
|
||||
rabbits_alive = self.model.topology.number_of_nodes()
|
||||
rabbits_alive = self.model.G.number_of_nodes()
|
||||
|
||||
if not rabbits_alive:
|
||||
return self.die()
|
||||
|
||||
prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
||||
for i in self.iter_agents(agent_class=RabbitModel):
|
||||
if i.state.id == i.dead.id:
|
||||
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||
math.log10(max(1, rabbits_alive))
|
||||
)
|
||||
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||
for i in self.iter_agents(agent_class=Rabbit):
|
||||
if i.state_id == i.dead.id:
|
||||
continue
|
||||
if self.prob(prob_death):
|
||||
self.info('I killed a rabbit: {}'.format(i.id))
|
||||
self.info("I killed a rabbit: {}".format(i.id))
|
||||
rabbits_alive -= 1
|
||||
i.set_state(i.dead)
|
||||
self.debug('Rabbits alive: {}'.format(rabbits_alive))
|
||||
i.die()
|
||||
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from soil import easy
|
||||
|
||||
with easy("rabbits.yml") as sim:
|
||||
sim.run()
|
||||
|
||||
@@ -7,21 +7,18 @@ description: null
|
||||
group: null
|
||||
interval: 1.0
|
||||
max_time: 100
|
||||
model_class: soil.environment.Environment
|
||||
model_class: rabbit_agents.RabbitEnv
|
||||
model_params:
|
||||
agents:
|
||||
topology: default
|
||||
agent_class: rabbit_agents.RabbitModel
|
||||
topology: true
|
||||
distribution:
|
||||
- agent_class: rabbit_agents.Male
|
||||
topology: default
|
||||
weight: 1
|
||||
- agent_class: rabbit_agents.Female
|
||||
topology: default
|
||||
weight: 1
|
||||
fixed:
|
||||
- agent_class: rabbit_agents.RandomAccident
|
||||
topology: null
|
||||
topology: false
|
||||
hidden: true
|
||||
state:
|
||||
group: environment
|
||||
@@ -29,13 +26,17 @@ model_params:
|
||||
group: network
|
||||
mating_prob: 0.1
|
||||
prob_death: 0.001
|
||||
topologies:
|
||||
default:
|
||||
topology:
|
||||
directed: true
|
||||
links: []
|
||||
nodes:
|
||||
- id: 1
|
||||
- id: 0
|
||||
topology:
|
||||
fixed:
|
||||
directed: true
|
||||
links: []
|
||||
nodes:
|
||||
- id: 1
|
||||
- id: 0
|
||||
model_reporters:
|
||||
num_males: 'num_males'
|
||||
num_females: 'num_females'
|
||||
num_rabbits: |
|
||||
py:lambda env: env.num_males + env.num_females
|
||||
extra:
|
||||
visualization_params: {}
|
||||
|
||||
@@ -1,130 +1,157 @@
|
||||
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
|
||||
from soil.time import Delta, When, NEVER
|
||||
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
|
||||
from soil.time import Delta
|
||||
from enum import Enum
|
||||
from collections import Counter
|
||||
import logging
|
||||
import math
|
||||
|
||||
|
||||
class RabbitModel(FSM, NetworkAgent):
|
||||
class RabbitEnv(Environment):
|
||||
@property
|
||||
def num_rabbits(self):
|
||||
return self.count_agents(agent_class=Rabbit)
|
||||
|
||||
mating_prob = 0.005
|
||||
offspring = 0
|
||||
@property
|
||||
def num_males(self):
|
||||
return self.count_agents(agent_class=Male)
|
||||
|
||||
@property
|
||||
def num_females(self):
|
||||
return self.count_agents(agent_class=Female)
|
||||
|
||||
|
||||
class Rabbit(FSM, NetworkAgent):
|
||||
|
||||
sexual_maturity = 30
|
||||
life_expectancy = 300
|
||||
birth = None
|
||||
|
||||
sexual_maturity = 3
|
||||
life_expectancy = 30
|
||||
@property
|
||||
def age(self):
|
||||
if self.birth is None:
|
||||
return None
|
||||
return self.now - self.birth
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def newborn(self):
|
||||
self.info("I am a newborn.")
|
||||
self.birth = self.now
|
||||
self.info(f'I am a newborn.')
|
||||
self.model['rabbits_alive'] = self.model.get('rabbits_alive', 0) + 1
|
||||
self.offspring = 0
|
||||
return self.youngling, Delta(self.sexual_maturity - self.age)
|
||||
|
||||
# Here we can skip the `youngling` state by using a coroutine/generator.
|
||||
while self.age < self.sexual_maturity:
|
||||
interval = self.sexual_maturity - self.age
|
||||
yield Delta(interval)
|
||||
|
||||
self.info(f'I am fertile! My age is {self.age}')
|
||||
return self.fertile
|
||||
|
||||
@property
|
||||
def age(self):
|
||||
return self.now - self.birth
|
||||
@state
|
||||
def youngling(self):
|
||||
if self.age >= self.sexual_maturity:
|
||||
self.info(f"I am fertile! My age is {self.age}")
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
raise Exception("Each subclass should define its fertile state")
|
||||
|
||||
def step(self):
|
||||
super().step()
|
||||
if self.prob(self.age / self.life_expectancy):
|
||||
return self.die()
|
||||
@state
|
||||
def dead(self):
|
||||
self.die()
|
||||
|
||||
|
||||
class Male(RabbitModel):
|
||||
|
||||
class Male(Rabbit):
|
||||
max_females = 5
|
||||
mating_prob = 0.001
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
# Males try to mate
|
||||
for f in self.model.agents(agent_class=Female,
|
||||
state_id=Female.fertile.id,
|
||||
limit=self.max_females):
|
||||
self.debug('Found a female:', repr(f))
|
||||
if self.prob(self['mating_prob']):
|
||||
f.impregnate(self)
|
||||
break # Take a break, don't try to impregnate the rest
|
||||
|
||||
|
||||
class Female(RabbitModel):
|
||||
due_date = None
|
||||
age_of_pregnancy = None
|
||||
gestation = 10
|
||||
mate = None
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
return self.fertile, NEVER
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self.info('I am pregnant')
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
|
||||
self.due_date = self.now + self.gestation
|
||||
# Males try to mate
|
||||
for f in self.model.agents(
|
||||
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||
):
|
||||
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||
if self.prob(self["mating_prob"]):
|
||||
f.impregnate(self)
|
||||
break # Do not try to impregnate other females
|
||||
|
||||
number_of_babies = int(8+4*self.random.random())
|
||||
|
||||
while self.now < self.due_date:
|
||||
yield When(self.due_date)
|
||||
|
||||
self.info('Having {} babies'.format(number_of_babies))
|
||||
for i in range(number_of_babies):
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class,
|
||||
topology=self.topology)
|
||||
self.model.add_edge(self, child)
|
||||
self.model.add_edge(self.mate, child)
|
||||
self.offspring += 1
|
||||
self.model.agents[self.mate].offspring += 1
|
||||
self.mate = None
|
||||
self.due_date = None
|
||||
return self.fertile
|
||||
class Female(Rabbit):
|
||||
gestation = 10
|
||||
conception = None
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
super().dead()
|
||||
if self.due_date is not None:
|
||||
self.info('A mother has died carrying a baby!!')
|
||||
def fertile(self):
|
||||
# Just wait for a Male
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
if self.conception is not None:
|
||||
return self.pregnant
|
||||
|
||||
@property
|
||||
def pregnancy(self):
|
||||
if self.conception is None:
|
||||
return None
|
||||
return self.now - self.conception
|
||||
|
||||
def impregnate(self, male):
|
||||
self.info(f'{repr(male)} impregnating female {repr(self)}')
|
||||
self.info(f"impregnated by {repr(male)}")
|
||||
self.mate = male
|
||||
self.set_state(self.pregnant, when=self.now)
|
||||
self.conception = self.now
|
||||
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self.debug("I am pregnant")
|
||||
|
||||
if self.age > self.life_expectancy:
|
||||
self.info("Dying before giving birth")
|
||||
return self.die()
|
||||
|
||||
if self.pregnancy >= self.gestation:
|
||||
self.info("Having {} babies".format(self.number_of_babies))
|
||||
for i in range(self.number_of_babies):
|
||||
state = {}
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class, **state)
|
||||
child.add_edge(self)
|
||||
if self.mate:
|
||||
child.add_edge(self.mate)
|
||||
self.mate.offspring += 1
|
||||
else:
|
||||
self.debug("The father has passed away")
|
||||
|
||||
self.offspring += 1
|
||||
self.mate = None
|
||||
return self.fertile
|
||||
|
||||
def die(self):
|
||||
if self.pregnancy is not None:
|
||||
self.info("A mother has died carrying a baby!!")
|
||||
return super().die()
|
||||
|
||||
|
||||
class RandomAccident(BaseAgent):
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
def step(self):
|
||||
rabbits_total = self.model.topology.number_of_nodes()
|
||||
if 'rabbits_alive' not in self.model:
|
||||
self.model['rabbits_alive'] = 0
|
||||
rabbits_alive = self.model.get('rabbits_alive', rabbits_total)
|
||||
prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
||||
for i in self.model.network_agents:
|
||||
if i.state.id == i.dead.id:
|
||||
rabbits_alive = self.model.G.number_of_nodes()
|
||||
|
||||
if not rabbits_alive:
|
||||
return self.die()
|
||||
|
||||
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||
math.log10(max(1, rabbits_alive))
|
||||
)
|
||||
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||
for i in self.iter_agents(agent_class=Rabbit):
|
||||
if i.state_id == i.dead.id:
|
||||
continue
|
||||
if self.prob(prob_death):
|
||||
self.info('I killed a rabbit: {}'.format(i.id))
|
||||
rabbits_alive = self.model['rabbits_alive'] = rabbits_alive -1
|
||||
i.set_state(i.dead)
|
||||
self.debug('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
|
||||
if self.model.count_agents(state_id=RabbitModel.dead.id) == self.model.topology.number_of_nodes():
|
||||
self.die()
|
||||
self.info("I killed a rabbit: {}".format(i.id))
|
||||
rabbits_alive -= 1
|
||||
i.die()
|
||||
self.debug("Rabbits alive: {}".format(rabbits_alive))
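For reference, the kill probability used by RandomAccident above scales the base prob_death by floor(log10(living rabbits)). A minimal standalone sketch of that arithmetic (the 0.001 base value is taken from the rabbits.yml fragment later in this diff; the population counts are illustrative):

import math

def accident_prob(base_prob, rabbits_alive):
    # Same formula as RandomAccident.step: the per-rabbit kill probability
    # grows with the order of magnitude of the living population.
    return base_prob * math.floor(math.log10(max(1, rabbits_alive)))

# With base_prob=0.001: 9 rabbits -> 0.0, 100 rabbits -> 0.002, 5000 rabbits -> 0.003
for alive in (9, 100, 5000):
    print(alive, accident_prob(0.001, alive))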
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from soil import easy
|
||||
|
||||
with easy("rabbits.yml") as sim:
|
||||
sim.run()
|
||||
|
||||
@@ -7,21 +7,18 @@ description: null
|
||||
group: null
|
||||
interval: 1.0
|
||||
max_time: 100
|
||||
model_class: soil.environment.Environment
|
||||
model_class: rabbit_agents.RabbitEnv
|
||||
model_params:
|
||||
agents:
|
||||
topology: default
|
||||
agent_class: rabbit_agents.RabbitModel
|
||||
topology: true
|
||||
distribution:
|
||||
- agent_class: rabbit_agents.Male
|
||||
topology: default
|
||||
weight: 1
|
||||
- agent_class: rabbit_agents.Female
|
||||
topology: default
|
||||
weight: 1
|
||||
fixed:
|
||||
- agent_class: rabbit_agents.RandomAccident
|
||||
topology: null
|
||||
topology: false
|
||||
hidden: true
|
||||
state:
|
||||
group: environment
|
||||
@@ -29,13 +26,17 @@ model_params:
|
||||
group: network
|
||||
mating_prob: 0.1
|
||||
prob_death: 0.001
|
||||
topologies:
|
||||
default:
|
||||
topology:
|
||||
directed: true
|
||||
links: []
|
||||
nodes:
|
||||
- id: 1
|
||||
- id: 0
|
||||
topology:
|
||||
fixed:
|
||||
directed: true
|
||||
links: []
|
||||
nodes:
|
||||
- id: 1
|
||||
- id: 0
|
||||
model_reporters:
|
||||
num_males: 'num_males'
|
||||
num_females: 'num_females'
|
||||
num_rabbits: |
|
||||
py:lambda env: env.num_males + env.num_females
|
||||
extra:
|
||||
visualization_params: {}
|
||||
|
||||
@@ -1,44 +1,43 @@
|
||||
'''
|
||||
"""
|
||||
Example of setting a
|
||||
Example of a fully programmatic simulation, without definition files.
|
||||
'''
|
||||
"""
|
||||
from soil import Simulation, agents
|
||||
from soil.time import Delta
|
||||
import logging
|
||||
|
||||
|
||||
|
||||
class MyAgent(agents.FSM):
|
||||
'''
|
||||
"""
|
||||
An agent that first does a ping
|
||||
'''
|
||||
"""
|
||||
|
||||
defaults = {'pong_counts': 2}
|
||||
defaults = {"pong_counts": 2}
|
||||
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
def ping(self):
|
||||
self.info('Ping')
|
||||
return self.pong, Delta(self.random.expovariate(1/16))
|
||||
self.info("Ping")
|
||||
return self.pong, Delta(self.random.expovariate(1 / 16))
|
||||
|
||||
@agents.state
|
||||
def pong(self):
|
||||
self.info('Pong')
|
||||
self.info("Pong")
|
||||
self.pong_counts -= 1
|
||||
self.info(str(self.pong_counts))
|
||||
if self.pong_counts < 1:
|
||||
return self.die()
|
||||
return None, Delta(self.random.expovariate(1/16))
|
||||
return None, Delta(self.random.expovariate(1 / 16))
|
||||
|
||||
|
||||
s = Simulation(name='Programmatic',
|
||||
network_agents=[{'agent_class': MyAgent, 'id': 0}],
|
||||
topology={'nodes': [{'id': 0}], 'links': []},
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
agent_class=MyAgent,
|
||||
dry_run=True)
|
||||
s = Simulation(
|
||||
name="Programmatic",
|
||||
network_agents=[{"agent_class": MyAgent, "id": 0}],
|
||||
topology={"nodes": [{"id": 0}], "links": []},
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
agent_class=MyAgent,
|
||||
dry_run=True,
|
||||
)
|
||||
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
envs = s.run()
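The ping/pong agent above schedules its next transition with Delta(self.random.expovariate(1 / 16)). As a quick sanity check, independent of soil, expovariate(lambd) draws exponentially distributed delays with mean 1/lambd, so the expected gap here is about 16 time units:

import random

rng = random.Random(42)
samples = [rng.expovariate(1 / 16) for _ in range(100_000)]
# The empirical mean should be close to 1 / (1/16) = 16
print(sum(samples) / len(samples))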
|
||||
|
||||
@@ -20,56 +20,83 @@ class TerroristSpreadModel(FSM, Geo):
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||
|
||||
self.information_spread_intensity = model.environment_params['information_spread_intensity']
|
||||
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
|
||||
self.prob_interaction = model.environment_params['prob_interaction']
|
||||
self.information_spread_intensity = model.environment_params[
|
||||
"information_spread_intensity"
|
||||
]
|
||||
self.terrorist_additional_influence = model.environment_params[
|
||||
"terrorist_additional_influence"
|
||||
]
|
||||
self.prob_interaction = model.environment_params["prob_interaction"]
|
||||
|
||||
if self['id'] == self.civilian.id: # Civilian
|
||||
if self["id"] == self.civilian.id: # Civilian
|
||||
self.mean_belief = self.random.uniform(0.00, 0.5)
|
||||
elif self['id'] == self.terrorist.id: # Terrorist
|
||||
elif self["id"] == self.terrorist.id: # Terrorist
|
||||
self.mean_belief = self.random.uniform(0.8, 1.00)
|
||||
elif self['id'] == self.leader.id: # Leader
|
||||
elif self["id"] == self.leader.id: # Leader
|
||||
self.mean_belief = 1.00
|
||||
else:
|
||||
raise Exception('Invalid state id: {}'.format(self['id']))
|
||||
|
||||
if 'min_vulnerability' in model.environment_params:
|
||||
self.vulnerability = self.random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
|
||||
else :
|
||||
self.vulnerability = self.random.uniform( 0, model.environment_params['max_vulnerability'] )
|
||||
raise Exception("Invalid state id: {}".format(self["id"]))
|
||||
|
||||
if "min_vulnerability" in model.environment_params:
|
||||
self.vulnerability = self.random.uniform(
|
||||
model.environment_params["min_vulnerability"],
|
||||
model.environment_params["max_vulnerability"],
|
||||
)
|
||||
else:
|
||||
self.vulnerability = self.random.uniform(
|
||||
0, model.environment_params["max_vulnerability"]
|
||||
)
|
||||
|
||||
@state
|
||||
def civilian(self):
|
||||
neighbours = list(self.get_neighboring_agents(agent_class=TerroristSpreadModel))
|
||||
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
|
||||
if len(neighbours) > 0:
|
||||
# Only interact with some of the neighbors
|
||||
interactions = list(n for n in neighbours if self.random.random() <= self.prob_interaction)
|
||||
influence = sum( self.degree(i) for i in interactions )
|
||||
mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
|
||||
mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
|
||||
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
||||
interactions = list(
|
||||
n for n in neighbours if self.random.random() <= self.prob_interaction
|
||||
)
|
||||
influence = sum(self.degree(i) for i in interactions)
|
||||
mean_belief = sum(
|
||||
i.mean_belief * self.degree(i) / influence for i in interactions
|
||||
)
|
||||
mean_belief = (
|
||||
mean_belief * self.information_spread_intensity
|
||||
+ self.mean_belief * (1 - self.information_spread_intensity)
|
||||
)
|
||||
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||
1 - self.vulnerability
|
||||
)
|
||||
|
||||
if self.mean_belief >= 0.8:
|
||||
return self.terrorist
|
||||
|
||||
@state
|
||||
def leader(self):
|
||||
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
|
||||
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]):
|
||||
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
|
||||
for neighbour in self.get_neighbors(
|
||||
state_id=[self.terrorist.id, self.leader.id]
|
||||
):
|
||||
if self.betweenness(neighbour) > self.betweenness(self):
|
||||
return self.terrorist
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
|
||||
agent_class=TerroristSpreadModel,
|
||||
limit_neighbors=True)
|
||||
neighbours = self.get_agents(
|
||||
state_id=[self.terrorist.id, self.leader.id],
|
||||
agent_class=TerroristSpreadModel,
|
||||
limit_neighbors=True,
|
||||
)
|
||||
if len(neighbours) > 0:
|
||||
influence = sum( self.degree(n) for n in neighbours )
|
||||
mean_belief = sum( n.mean_belief * self.degree(n) / influence for n in neighbours )
|
||||
mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
||||
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
|
||||
influence = sum(self.degree(n) for n in neighbours)
|
||||
mean_belief = sum(
|
||||
n.mean_belief * self.degree(n) / influence for n in neighbours
|
||||
)
|
||||
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||
1 - self.vulnerability
|
||||
)
|
||||
self.mean_belief = self.mean_belief ** (
|
||||
1 - self.terrorist_additional_influence
|
||||
)
|
||||
|
||||
# Check if there are any leaders in the group
|
||||
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
|
||||
@@ -82,21 +109,29 @@ class TerroristSpreadModel(FSM, Geo):
|
||||
return self.leader
|
||||
|
||||
def ego_search(self, steps=1, center=False, node=None, **kwargs):
|
||||
'''Get a list of nodes in the ego network of *node* of radius *steps*'''
|
||||
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
|
||||
node = as_node(node if node is not None else self)
|
||||
G = self.subgraph(**kwargs)
|
||||
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
|
||||
|
||||
def degree(self, node, force=False):
|
||||
node = as_node(node)
|
||||
if force or (not hasattr(self.model, '_degree')) or getattr(self.model, '_last_step', 0) < self.now:
|
||||
if (
|
||||
force
|
||||
or (not hasattr(self.model, "_degree"))
|
||||
or getattr(self.model, "_last_step", 0) < self.now
|
||||
):
|
||||
self.model._degree = nx.degree_centrality(self.G)
|
||||
self.model._last_step = self.now
|
||||
return self.model._degree[node]
|
||||
|
||||
def betweenness(self, node, force=False):
|
||||
node = as_node(node)
|
||||
if force or (not hasattr(self.model, '_betweenness')) or getattr(self.model, '_last_step', 0) < self.now:
|
||||
if (
|
||||
force
|
||||
or (not hasattr(self.model, "_betweenness"))
|
||||
or getattr(self.model, "_last_step", 0) < self.now
|
||||
):
|
||||
self.model._betweenness = nx.betweenness_centrality(self.G)
|
||||
self.model._last_step = self.now
|
||||
return self.model._betweenness[node]
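The degree and betweenness helpers above recompute the networkx centralities at most once per simulation step and cache them on the model. A minimal standalone sketch of the same caching pattern (the Model class here is hypothetical and only illustrates the idea):

import networkx as nx

class Model:
    def __init__(self, G):
        self.G = G
        self.now = 0  # current simulation time

    def degree(self, node, force=False):
        # Recompute only when forced or when the cache is stale for this step
        if force or getattr(self, "_last_step", -1) < self.now:
            self._degree = nx.degree_centrality(self.G)
            self._last_step = self.now
        return self._degree[node]

model = Model(nx.erdos_renyi_graph(50, 0.1, seed=1))
print(model.degree(0))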
|
||||
@@ -114,17 +149,20 @@ class TrainingAreaModel(FSM, Geo):
|
||||
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||
self.training_influence = model.environment_params['training_influence']
|
||||
if 'min_vulnerability' in model.environment_params:
|
||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
||||
else: self.min_vulnerability = 0
|
||||
self.training_influence = model.environment_params["training_influence"]
|
||||
if "min_vulnerability" in model.environment_params:
|
||||
self.min_vulnerability = model.environment_params["min_vulnerability"]
|
||||
else:
|
||||
self.min_vulnerability = 0
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def terrorist(self):
|
||||
for neighbour in self.get_neighboring_agents(agent_class=TerroristSpreadModel):
|
||||
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
|
||||
if neighbour.vulnerability > self.min_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
|
||||
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||
1 - self.training_influence
|
||||
)
|
||||
|
||||
|
||||
class HavenModel(FSM, Geo):
|
||||
@@ -141,14 +179,15 @@ class HavenModel(FSM, Geo):
|
||||
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||
self.haven_influence = model.environment_params['haven_influence']
|
||||
if 'min_vulnerability' in model.environment_params:
|
||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
||||
else: self.min_vulnerability = 0
|
||||
self.max_vulnerability = model.environment_params['max_vulnerability']
|
||||
self.haven_influence = model.environment_params["haven_influence"]
|
||||
if "min_vulnerability" in model.environment_params:
|
||||
self.min_vulnerability = model.environment_params["min_vulnerability"]
|
||||
else:
|
||||
self.min_vulnerability = 0
|
||||
self.max_vulnerability = model.environment_params["max_vulnerability"]
|
||||
|
||||
def get_occupants(self, **kwargs):
|
||||
return self.get_neighboring_agents(agent_class=TerroristSpreadModel, **kwargs)
|
||||
return self.get_neighbors(agent_class=TerroristSpreadModel, **kwargs)
|
||||
|
||||
@state
|
||||
def civilian(self):
|
||||
@@ -158,14 +197,18 @@ class HavenModel(FSM, Geo):
|
||||
|
||||
for neighbour in self.get_occupants():
|
||||
if neighbour.vulnerability > self.min_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence )
|
||||
neighbour.vulnerability = neighbour.vulnerability * (
|
||||
1 - self.haven_influence
|
||||
)
|
||||
return self.civilian
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
for neighbour in self.get_occupants():
|
||||
if neighbour.vulnerability < self.max_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence )
|
||||
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||
1 - self.haven_influence
|
||||
)
|
||||
return self.terrorist
|
||||
|
||||
|
||||
@@ -184,10 +227,10 @@ class TerroristNetworkModel(TerroristSpreadModel):
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||
|
||||
self.vision_range = model.environment_params['vision_range']
|
||||
self.sphere_influence = model.environment_params['sphere_influence']
|
||||
self.weight_social_distance = model.environment_params['weight_social_distance']
|
||||
self.weight_link_distance = model.environment_params['weight_link_distance']
|
||||
self.vision_range = model.environment_params["vision_range"]
|
||||
self.sphere_influence = model.environment_params["sphere_influence"]
|
||||
self.weight_social_distance = model.environment_params["weight_social_distance"]
|
||||
self.weight_link_distance = model.environment_params["weight_link_distance"]
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
@@ -200,28 +243,49 @@ class TerroristNetworkModel(TerroristSpreadModel):
|
||||
return super().leader()
|
||||
|
||||
def update_relationships(self):
|
||||
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
|
||||
close_ups = set(self.geo_search(radius=self.vision_range, agent_class=TerroristNetworkModel))
|
||||
step_neighbours = set(self.ego_search(self.sphere_influence, agent_class=TerroristNetworkModel, center=False))
|
||||
neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_class=TerroristNetworkModel))
|
||||
if self.count_neighbors(state_id=self.civilian.id) == 0:
|
||||
close_ups = set(
|
||||
self.geo_search(
|
||||
radius=self.vision_range, agent_class=TerroristNetworkModel
|
||||
)
|
||||
)
|
||||
step_neighbours = set(
|
||||
self.ego_search(
|
||||
self.sphere_influence,
|
||||
agent_class=TerroristNetworkModel,
|
||||
center=False,
|
||||
)
|
||||
)
|
||||
neighbours = set(
|
||||
agent.id
|
||||
for agent in self.get_neighbors(
|
||||
agent_class=TerroristNetworkModel
|
||||
)
|
||||
)
|
||||
search = (close_ups | step_neighbours) - neighbours
|
||||
for agent in self.get_agents(search):
|
||||
social_distance = 1 / self.shortest_path_length(agent.id)
|
||||
spatial_proximity = ( 1 - self.get_distance(agent.id) )
|
||||
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
|
||||
if agent['id'] == agent.civilian.id and self.random.random() < prob_new_interaction:
|
||||
spatial_proximity = 1 - self.get_distance(agent.id)
|
||||
prob_new_interaction = (
|
||||
self.weight_social_distance * social_distance
|
||||
+ self.weight_link_distance * spatial_proximity
|
||||
)
|
||||
if (
|
||||
agent["id"] == agent.civilian.id
|
||||
and self.random.random() < prob_new_interaction
|
||||
):
|
||||
self.add_edge(agent)
|
||||
break
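To make the link-creation rule above concrete, prob_new_interaction is a weighted sum of social closeness (1 / shortest path length) and spatial closeness (1 - Euclidean distance). A small worked example with made-up weights and distances (none of these numbers come from a real configuration):

weight_social_distance = 0.5   # illustrative values
weight_link_distance = 0.5

path_length = 2        # shortest path to the candidate agent
euclidean = 0.3        # distance between "pos" coordinates

social_distance = 1 / path_length           # 0.5
spatial_proximity = 1 - euclidean           # 0.7
prob_new_interaction = (weight_social_distance * social_distance
                        + weight_link_distance * spatial_proximity)
print(prob_new_interaction)  # 0.6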
|
||||
|
||||
def get_distance(self, target):
|
||||
source_x, source_y = nx.get_node_attributes(self.G, 'pos')[self.id]
|
||||
target_x, target_y = nx.get_node_attributes(self.G, 'pos')[target]
|
||||
dx = abs( source_x - target_x )
|
||||
dy = abs( source_y - target_y )
|
||||
return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 )
|
||||
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
|
||||
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
|
||||
dx = abs(source_x - target_x)
|
||||
dy = abs(source_y - target_y)
|
||||
return (dx**2 + dy**2) ** (1 / 2)
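get_distance above is plain 2D Euclidean distance between the node's and the target's "pos" attributes; math.hypot computes the same value without spelling out the square root:

import math

dx, dy = 3.0, 4.0
print((dx**2 + dy**2) ** (1 / 2))  # 5.0
print(math.hypot(dx, dy))          # 5.0, equivalent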
|
||||
|
||||
def shortest_path_length(self, target):
|
||||
try:
|
||||
return nx.shortest_path_length(self.G, self.id, target)
|
||||
except nx.NetworkXNoPath:
|
||||
return float('inf')
|
||||
return float("inf")
|
||||
|
||||
@@ -5,6 +5,6 @@ pyyaml>=5.1
pandas>=1
SALib>=1.3
Jinja2
Mesa>=1
Mesa>=1.1
pydantic>=1.9
sqlalchemy>=1.4
2  setup.py
@@ -53,6 +53,6 @@ setup(
include_package_data=True,
entry_points={
'console_scripts':
['soil = soil.__init__:main',
['soil = soil.__main__:main',
'soil-web = soil.web.__init__:main']
})
@@ -1 +1 @@
0.20.7
0.30.0rc2
257  soil/__init__.py
@@ -5,6 +5,7 @@ import sys
|
||||
import os
|
||||
import logging
|
||||
import traceback
|
||||
from contextlib import contextmanager
|
||||
|
||||
from .version import __version__
|
||||
|
||||
@@ -16,98 +17,185 @@ except NameError:
|
||||
from .agents import *
|
||||
from . import agents
|
||||
from .simulation import *
|
||||
from .environment import Environment
|
||||
from .environment import Environment, EventedEnvironment
|
||||
from . import serialization
|
||||
from .utils import logger
|
||||
from .time import *
|
||||
|
||||
def main(cfg='simulation.yml', **kwargs):
|
||||
|
||||
def main(
|
||||
cfg="simulation.yml",
|
||||
exporters=None,
|
||||
parallel=None,
|
||||
output="soil_output",
|
||||
*,
|
||||
do_run=False,
|
||||
debug=False,
|
||||
pdb=False,
|
||||
**kwargs,
|
||||
):
|
||||
|
||||
if isinstance(cfg, Simulation):
|
||||
sim = cfg
|
||||
import argparse
|
||||
from . import simulation
|
||||
|
||||
logger.info('Running SOIL version: {}'.format(__version__))
|
||||
logger.info("Running SOIL version: {}".format(__version__))
|
||||
|
||||
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
|
||||
parser.add_argument('file', type=str,
|
||||
nargs="?",
|
||||
default=cfg,
|
||||
help='Configuration file for the simulation (e.g., YAML or JSON)')
|
||||
parser.add_argument('--version', action='store_true',
|
||||
help='Show version info and exit')
|
||||
parser.add_argument('--module', '-m', type=str,
|
||||
help='file containing the code of any custom agents.')
|
||||
parser.add_argument('--dry-run', '--dry', action='store_true',
|
||||
help='Do not store the results of the simulation to disk, show in terminal instead.')
|
||||
parser.add_argument('--pdb', action='store_true',
|
||||
help='Use a pdb console in case of exception.')
|
||||
parser.add_argument('--debug', action='store_true',
|
||||
help='Run a customized version of a pdb console to debug a simulation.')
|
||||
parser.add_argument('--graph', '-g', action='store_true',
|
||||
help='Dump each trial\'s network topology as a GEXF graph. Defaults to false.')
|
||||
parser.add_argument('--csv', action='store_true',
|
||||
help='Dump all data collected in CSV format. Defaults to false.')
|
||||
parser.add_argument('--level', type=str,
|
||||
help='Logging level')
|
||||
parser.add_argument('--output', '-o', type=str, default="soil_output",
|
||||
help='folder to write results to. It defaults to the current directory.')
|
||||
parser.add_argument('--synchronous', action='store_true',
|
||||
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
|
||||
parser.add_argument('-e', '--exporter', action='append',
|
||||
help='Export environment and/or simulations using this exporter')
|
||||
parser.add_argument('--only-convert', '--convert', action='store_true',
|
||||
help='Do not run the simulation, only convert the configuration file(s) and output them.')
|
||||
parser = argparse.ArgumentParser(description="Run a SOIL simulation")
|
||||
parser.add_argument(
|
||||
"file",
|
||||
type=str,
|
||||
nargs="?",
|
||||
default=cfg if sim is None else '',
|
||||
help="Configuration file for the simulation (e.g., YAML or JSON)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--version", action="store_true", help="Show version info and exit"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--module",
|
||||
"-m",
|
||||
type=str,
|
||||
help="file containing the code of any custom agents.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dry-run",
|
||||
"--dry",
|
||||
action="store_true",
|
||||
help="Do not store the results of the simulation to disk, show in terminal instead.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--pdb", action="store_true", help="Use a pdb console in case of exception."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--debug",
|
||||
action="store_true",
|
||||
help="Run a customized version of a pdb console to debug a simulation.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--graph",
|
||||
"-g",
|
||||
action="store_true",
|
||||
help="Dump each trial's network topology as a GEXF graph. Defaults to false.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--csv",
|
||||
action="store_true",
|
||||
help="Dump all data collected in CSV format. Defaults to false.",
|
||||
)
|
||||
parser.add_argument("--level", type=str, help="Logging level")
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
"-o",
|
||||
type=str,
|
||||
default=output or "soil_output",
|
||||
help="folder to write results to. It defaults to the current directory.",
|
||||
)
|
||||
if parallel is None:
|
||||
parser.add_argument(
|
||||
"--synchronous",
|
||||
action="store_true",
|
||||
help="Run trials serially and synchronously instead of in parallel. Defaults to false.",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-e",
|
||||
"--exporter",
|
||||
action="append",
|
||||
default=[],
|
||||
help="Export environment and/or simulations using this exporter",
|
||||
)
|
||||
|
||||
parser.add_argument("--set",
|
||||
metavar="KEY=VALUE",
|
||||
action='append',
|
||||
help="Set a number of parameters that will be passed to the simulation."
|
||||
"(do not put spaces before or after the = sign). "
|
||||
"If a value contains spaces, you should define "
|
||||
"it with double quotes: "
|
||||
'foo="this is a sentence". Note that '
|
||||
"values are always treated as strings.")
|
||||
parser.add_argument(
|
||||
"--only-convert",
|
||||
"--convert",
|
||||
action="store_true",
|
||||
help="Do not run the simulation, only convert the configuration file(s) and output them.",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"--set",
|
||||
metavar="KEY=VALUE",
|
||||
action="append",
|
||||
help="Set a number of parameters that will be passed to the simulation."
|
||||
"(do not put spaces before or after the = sign). "
|
||||
"If a value contains spaces, you should define "
|
||||
"it with double quotes: "
|
||||
'foo="this is a sentence". Note that '
|
||||
"values are always treated as strings.",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
logger.setLevel(getattr(logging, (args.level or 'INFO').upper()))
|
||||
logger.setLevel(getattr(logging, (args.level or "INFO").upper()))
|
||||
|
||||
if args.version:
|
||||
return
|
||||
|
||||
if parallel is None:
|
||||
parallel = not args.synchronous
|
||||
|
||||
exporters = exporters or [
|
||||
"default",
|
||||
]
|
||||
for exp in args.exporter:
|
||||
if exp not in exporters:
|
||||
exporters.append(exp)
|
||||
if args.csv:
|
||||
exporters.append("csv")
|
||||
if args.graph:
|
||||
exporters.append("gexf")
|
||||
|
||||
if os.getcwd() not in sys.path:
|
||||
sys.path.append(os.getcwd())
|
||||
if args.module:
|
||||
importlib.import_module(args.module)
|
||||
if output is None:
|
||||
output = args.output
|
||||
|
||||
logger.info('Loading config file: {}'.format(args.file))
|
||||
debug = debug or args.debug
|
||||
|
||||
if args.pdb or args.debug:
|
||||
if args.pdb or debug:
|
||||
args.synchronous = True
|
||||
if args.debug:
|
||||
os.environ['SOIL_DEBUG'] = 'true'
|
||||
os.environ["SOIL_POSTMORTEM"] = "true"
|
||||
|
||||
res = []
|
||||
try:
|
||||
exporters = list(args.exporter or ['default', ])
|
||||
if args.csv:
|
||||
exporters.append('csv')
|
||||
if args.graph:
|
||||
exporters.append('gexf')
|
||||
exp_params = {}
|
||||
if args.dry_run:
|
||||
exp_params['copy_to'] = sys.stdout
|
||||
|
||||
if not os.path.exists(args.file):
|
||||
logger.error('Please, input a valid file')
|
||||
return
|
||||
for sim in simulation.iter_from_config(args.file):
|
||||
if sim:
|
||||
logger.info("Loading simulation instance")
|
||||
sim.dry_run = args.dry_run
|
||||
sim.exporters = exporters
|
||||
sim.parallel = parallel
|
||||
sim.outdir = output
|
||||
sims = [sim, ]
|
||||
else:
|
||||
logger.info("Loading config file: {}".format(args.file))
|
||||
if not os.path.exists(args.file):
|
||||
logger.error("Please, input a valid file")
|
||||
return
|
||||
|
||||
sims = list(simulation.iter_from_config(
|
||||
args.file,
|
||||
dry_run=args.dry_run,
|
||||
exporters=exporters,
|
||||
parallel=parallel,
|
||||
outdir=output,
|
||||
exporter_params=exp_params,
|
||||
**kwargs,
|
||||
))
|
||||
|
||||
for sim in sims:
|
||||
|
||||
if args.set:
|
||||
for s in args.set:
|
||||
k, v = s.split('=', 1)[:2]
|
||||
k, v = s.split("=", 1)[:2]
|
||||
v = eval(v)
|
||||
tail, *head = k.rsplit('.', 1)[::-1]
|
||||
tail, *head = k.rsplit(".", 1)[::-1]
|
||||
target = sim
|
||||
if head:
|
||||
for part in head[0].split('.'):
|
||||
for part in head[0].split("."):
|
||||
try:
|
||||
target = getattr(target, part)
|
||||
except AttributeError:
|
||||
@@ -117,30 +205,43 @@ def main(cfg='simulation.yml', **kwargs):
|
||||
except AttributeError:
|
||||
target[tail] = v
|
||||
|
||||
if args.only_convert:
|
||||
print(sim.to_yaml())
|
||||
continue
|
||||
|
||||
sim.run_simulation(dry_run=args.dry_run,
|
||||
exporters=exporters,
|
||||
parallel=(not args.synchronous),
|
||||
outdir=args.output,
|
||||
exporter_params=exp_params,
|
||||
**kwargs)
|
||||
if args.only_convert:
|
||||
print(sim.to_yaml())
|
||||
continue
|
||||
if do_run:
|
||||
res.append(sim.run())
|
||||
else:
|
||||
print("not running")
|
||||
res.append(sim)
|
||||
|
||||
except Exception as ex:
|
||||
if args.pdb:
|
||||
from .debugging import post_mortem
|
||||
|
||||
print(traceback.format_exc())
|
||||
post_mortem()
|
||||
else:
|
||||
raise
|
||||
if debug:
|
||||
from .debugging import set_trace
|
||||
|
||||
def easy(cfg, debug=False):
|
||||
sim = simulation.from_config(cfg)
|
||||
if debug or os.environ.get('SOIL_DEBUG'):
|
||||
from .debugging import setup
|
||||
setup(sys._getframe().f_back)
|
||||
return sim
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
os.environ["SOIL_DEBUG"] = "true"
|
||||
set_trace()
|
||||
return res
|
||||
|
||||
|
||||
@contextmanager
|
||||
def easy(cfg, pdb=False, debug=False, **kwargs):
|
||||
try:
|
||||
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
|
||||
except Exception as e:
|
||||
if os.environ.get("SOIL_POSTMORTEM"):
|
||||
from .debugging import post_mortem
|
||||
|
||||
print(traceback.format_exc())
|
||||
post_mortem()
|
||||
raise
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main(do_run=True)
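As used in the rabbits example earlier in this diff, the new easy() context manager wraps main() so a simulation can be loaded and run from a script. A minimal usage sketch, assuming a simulation file named simulation.yml exists in the working directory:

from soil import easy

# easy() yields the loaded simulation; when SOIL_POSTMORTEM is set,
# exceptions trigger the post-mortem debugger (see the implementation above).
with easy("simulation.yml") as sim:
    sim.run()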
|
||||
|
||||
@@ -1,4 +1,9 @@
|
||||
from . import main
|
||||
from . import main as init_main
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
|
||||
def main():
|
||||
init_main(do_run=True)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
init_main(do_run=True)
|
||||
|
||||
@@ -7,6 +7,7 @@ class BassModel(FSM):
|
||||
innovation_prob
|
||||
imitation_prob
|
||||
"""
|
||||
|
||||
sentimentCorrelation = 0
|
||||
|
||||
def step(self):
|
||||
@@ -19,9 +20,9 @@ class BassModel(FSM):
|
||||
self.sentimentCorrelation = 1
|
||||
return self.aware
|
||||
else:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
|
||||
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
|
||||
num_neighbors_aware = len(aware_neighbors)
|
||||
if self.prob((self['imitation_prob']*num_neighbors_aware)):
|
||||
if self.prob((self["imitation_prob"] * num_neighbors_aware)):
|
||||
self.sentimentCorrelation = 1
|
||||
return self.aware
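The imitation step above fires with probability imitation_prob scaled by the number of aware neighbours. Assuming self.prob(p) is a Bernoulli draw against the agent's RNG (which is how it is used throughout these agents, but an assumption nonetheless), the effect can be reproduced in isolation:

import random

rng = random.Random(0)

def prob(p):
    # Assumed behaviour of BaseAgent.prob: True with probability p
    return rng.random() < p

imitation_prob = 0.05
num_neighbors_aware = 3
became_aware = prob(imitation_prob * num_neighbors_aware)  # roughly a 15% chance
print(became_aware)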
|
||||
|
||||
|
||||
@@ -20,28 +20,40 @@ class BigMarketModel(FSM):
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self.enterprises = self.env.environment_params['enterprises']
|
||||
self.enterprises = self.env.environment_params["enterprises"]
|
||||
self.type = ""
|
||||
|
||||
if self.id < len(self.enterprises): # Enterprises
|
||||
self.set_state(self.enterprise.id)
|
||||
self._set_state(self.enterprise.id)
|
||||
self.type = "Enterprise"
|
||||
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
|
||||
self.tweet_probability = environment.environment_params[
|
||||
"tweet_probability_enterprises"
|
||||
][self.id]
|
||||
else: # normal users
|
||||
self.type = "User"
|
||||
self.set_state(self.user.id)
|
||||
self.tweet_probability = environment.environment_params['tweet_probability_users']
|
||||
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
|
||||
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
|
||||
self.sentiment_about = environment.environment_params['sentiment_about'] # List
|
||||
self._set_state(self.user.id)
|
||||
self.tweet_probability = environment.environment_params[
|
||||
"tweet_probability_users"
|
||||
]
|
||||
self.tweet_relevant_probability = environment.environment_params[
|
||||
"tweet_relevant_probability"
|
||||
]
|
||||
self.tweet_probability_about = environment.environment_params[
|
||||
"tweet_probability_about"
|
||||
] # List
|
||||
self.sentiment_about = environment.environment_params[
|
||||
"sentiment_about"
|
||||
] # List
|
||||
|
||||
@state
|
||||
def enterprise(self):
|
||||
|
||||
if self.random.random() < self.tweet_probability: # Tweets
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
|
||||
aware_neighbors = self.get_neighbors(
|
||||
state_id=self.number_of_enterprises
|
||||
) # Nodes neighbour users
|
||||
for x in aware_neighbors:
|
||||
if self.random.uniform(0,10) < 5:
|
||||
if self.random.uniform(0, 10) < 5:
|
||||
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
|
||||
else:
|
||||
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
|
||||
@@ -49,15 +61,19 @@ class BigMarketModel(FSM):
|
||||
# Set the limits
|
||||
if x.sentiment_about[self.id] > 1:
|
||||
x.sentiment_about[self.id] = 1
|
||||
if x.sentiment_about[self.id]< -1:
|
||||
if x.sentiment_about[self.id] < -1:
|
||||
x.sentiment_about[self.id] = -1
|
||||
|
||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
|
||||
x.attrs[
|
||||
"sentiment_enterprise_%s" % self.enterprises[self.id]
|
||||
] = x.sentiment_about[self.id]
|
||||
|
||||
@state
|
||||
def user(self):
|
||||
if self.random.random() < self.tweet_probability: # Tweets
|
||||
if self.random.random() < self.tweet_relevant_probability: # Tweets something relevant
|
||||
if (
|
||||
self.random.random() < self.tweet_relevant_probability
|
||||
): # Tweets something relevant
|
||||
# Tweet probability per enterprise
|
||||
for i in range(len(self.enterprises)):
|
||||
random_num = self.random.random()
|
||||
@@ -65,23 +81,29 @@ class BigMarketModel(FSM):
|
||||
# The condition is fulfilled, sentiments are evaluated towards that enterprise
|
||||
if self.sentiment_about[i] < 0:
|
||||
# NEGATIVE
|
||||
self.userTweets("negative",i)
|
||||
self.userTweets("negative", i)
|
||||
elif self.sentiment_about[i] == 0:
|
||||
# NEUTRAL
|
||||
pass
|
||||
else:
|
||||
# POSITIVE
|
||||
self.userTweets("positive",i)
|
||||
for i in range(len(self.enterprises)): # So that it never is set to 0 if there are not changes (logs)
|
||||
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
|
||||
self.userTweets("positive", i)
|
||||
for i in range(
|
||||
len(self.enterprises)
|
||||
): # So that it never is set to 0 if there are no changes (logs)
|
||||
self.attrs[
|
||||
"sentiment_enterprise_%s" % self.enterprises[i]
|
||||
] = self.sentiment_about[i]
|
||||
|
||||
def userTweets(self, sentiment,enterprise):
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
|
||||
def userTweets(self, sentiment, enterprise):
|
||||
aware_neighbors = self.get_neighbors(
|
||||
state_id=self.number_of_enterprises
|
||||
) # Nodes neighbours users
|
||||
for x in aware_neighbors:
|
||||
if sentiment == "positive":
|
||||
x.sentiment_about[enterprise] +=0.003
|
||||
x.sentiment_about[enterprise] += 0.003
|
||||
elif sentiment == "negative":
|
||||
x.sentiment_about[enterprise] -=0.003
|
||||
x.sentiment_about[enterprise] -= 0.003
|
||||
else:
|
||||
pass
|
||||
|
||||
@@ -91,4 +113,6 @@ class BigMarketModel(FSM):
|
||||
if x.sentiment_about[enterprise] < -1:
|
||||
x.sentiment_about[enterprise] = -1
|
||||
|
||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]
|
||||
x.attrs[
|
||||
"sentiment_enterprise_%s" % self.enterprises[enterprise]
|
||||
] = x.sentiment_about[enterprise]
|
||||
|
||||
@@ -14,10 +14,10 @@ class CounterModel(NetworkAgent):
|
||||
def step(self):
|
||||
# Outside effects
|
||||
total = len(list(self.model.schedule._agents))
|
||||
neighbors = len(list(self.get_neighboring_agents()))
|
||||
self['times'] = self.get('times', 0) + 1
|
||||
self['neighbors'] = neighbors
|
||||
self['total'] = total
|
||||
neighbors = len(list(self.get_neighbors()))
|
||||
self["times"] = self.get("times", 0) + 1
|
||||
self["neighbors"] = neighbors
|
||||
self["total"] = total
|
||||
|
||||
|
||||
class AggregatedCounter(NetworkAgent):
|
||||
@@ -32,9 +32,9 @@ class AggregatedCounter(NetworkAgent):
|
||||
|
||||
def step(self):
|
||||
# Outside effects
|
||||
self['times'] += 1
|
||||
neighbors = len(list(self.get_neighboring_agents()))
|
||||
self['neighbors'] += neighbors
|
||||
self["times"] += 1
|
||||
neighbors = len(list(self.get_neighbors()))
|
||||
self["neighbors"] += neighbors
|
||||
total = len(list(self.model.schedule.agents))
|
||||
self['total'] += total
|
||||
self.debug('Running for step: {}. Total: {}'.format(self.now, total))
|
||||
self["total"] += total
|
||||
self.debug("Running for step: {}. Total: {}".format(self.now, total))
|
||||
|
||||
@@ -2,20 +2,20 @@ from scipy.spatial import cKDTree as KDTree
|
||||
import networkx as nx
|
||||
from . import NetworkAgent, as_node
|
||||
|
||||
|
||||
class Geo(NetworkAgent):
|
||||
'''In this type of network, nodes have a "pos" attribute.'''
|
||||
"""In this type of network, nodes have a "pos" attribute."""
|
||||
|
||||
def geo_search(self, radius, node=None, center=False, **kwargs):
|
||||
'''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
|
||||
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
|
||||
node = as_node(node if node is not None else self)
|
||||
|
||||
G = self.subgraph(**kwargs)
|
||||
|
||||
pos = nx.get_node_attributes(G, 'pos')
|
||||
pos = nx.get_node_attributes(G, "pos")
|
||||
if not pos:
|
||||
return []
|
||||
nodes, coords = list(zip(*pos.items()))
|
||||
kdtree = KDTree(coords) # Cannot provide generator.
|
||||
indices = kdtree.query_ball_point(pos[node], radius)
|
||||
return [nodes[i] for i in indices if center or (nodes[i] != node)]
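geo_search above builds a scipy cKDTree from every node's "pos" attribute and returns all nodes within the given radius of the calling node. A self-contained sketch of the same query outside of soil:

import networkx as nx
from scipy.spatial import cKDTree as KDTree

G = nx.Graph()
G.add_node("a", pos=(0.0, 0.0))
G.add_node("b", pos=(0.5, 0.0))
G.add_node("c", pos=(3.0, 3.0))

pos = nx.get_node_attributes(G, "pos")
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords)                          # cannot take a generator
indices = kdtree.query_ball_point(pos["a"], r=1.0)
print([nodes[i] for i in indices if nodes[i] != "a"])  # ['b']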
|
||||
|
||||
|
||||
@@ -11,10 +11,10 @@ class IndependentCascadeModel(BaseAgent):
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self.innovation_prob = self.env.environment_params['innovation_prob']
|
||||
self.imitation_prob = self.env.environment_params['imitation_prob']
|
||||
self.state['time_awareness'] = 0
|
||||
self.state['sentimentCorrelation'] = 0
|
||||
self.innovation_prob = self.env.environment_params["innovation_prob"]
|
||||
self.imitation_prob = self.env.environment_params["imitation_prob"]
|
||||
self.state["time_awareness"] = 0
|
||||
self.state["sentimentCorrelation"] = 0
|
||||
|
||||
def step(self):
|
||||
self.behaviour()
|
||||
@@ -23,25 +23,27 @@ class IndependentCascadeModel(BaseAgent):
|
||||
aware_neighbors_1_time_step = []
|
||||
# Outside effects
|
||||
if self.prob(self.innovation_prob):
|
||||
if self.state['id'] == 0:
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
self.state['time_awareness'] = self.env.now # To know when they have been infected
|
||||
if self.state["id"] == 0:
|
||||
self.state["id"] = 1
|
||||
self.state["sentimentCorrelation"] = 1
|
||||
self.state[
|
||||
"time_awareness"
|
||||
] = self.env.now # To know when they have been infected
|
||||
else:
|
||||
pass
|
||||
|
||||
return
|
||||
|
||||
# Imitation effects
|
||||
if self.state['id'] == 0:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
if self.state["id"] == 0:
|
||||
aware_neighbors = self.get_neighbors(state_id=1)
|
||||
for x in aware_neighbors:
|
||||
if x.state['time_awareness'] == (self.env.now-1):
|
||||
if x.state["time_awareness"] == (self.env.now - 1):
|
||||
aware_neighbors_1_time_step.append(x)
|
||||
num_neighbors_aware = len(aware_neighbors_1_time_step)
|
||||
if self.prob(self.imitation_prob*num_neighbors_aware):
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
if self.prob(self.imitation_prob * num_neighbors_aware):
|
||||
self.state["id"] = 1
|
||||
self.state["sentimentCorrelation"] = 1
|
||||
else:
|
||||
pass
|
||||
|
||||
|
||||
@@ -23,87 +23,100 @@ class SpreadModelM2(BaseAgent):
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
||||
|
||||
|
||||
# Use a single generator with the same seed as `self.random`
|
||||
random = np.random.default_rng(seed=self._seed)
|
||||
self.prob_neutral_making_denier = random.normal(environment.environment_params['prob_neutral_making_denier'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_neutral_making_denier = random.normal(
|
||||
environment.environment_params["prob_neutral_making_denier"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_infect = random.normal(environment.environment_params['prob_infect'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_infect = random.normal(
|
||||
environment.environment_params["prob_infect"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_cured_healing_infected = random.normal(environment.environment_params['prob_cured_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_vaccinate_neutral = random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_healing_infected = random.normal(
|
||||
environment.environment_params["prob_cured_healing_infected"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_cured_vaccinate_neutral = random.normal(
|
||||
environment.environment_params["prob_cured_vaccinate_neutral"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_vaccinated_healing_infected = random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_vaccinate_neutral = random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_generate_anti_rumor = random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_healing_infected = random.normal(
|
||||
environment.environment_params["prob_vaccinated_healing_infected"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_vaccinated_vaccinate_neutral = random.normal(
|
||||
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_generate_anti_rumor = random.normal(
|
||||
environment.environment_params["prob_generate_anti_rumor"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
def step(self):
|
||||
|
||||
if self.state['id'] == 0: # Neutral
|
||||
if self.state["id"] == 0: # Neutral
|
||||
self.neutral_behaviour()
|
||||
elif self.state['id'] == 1: # Infected
|
||||
elif self.state["id"] == 1: # Infected
|
||||
self.infected_behaviour()
|
||||
elif self.state['id'] == 2: # Cured
|
||||
elif self.state["id"] == 2: # Cured
|
||||
self.cured_behaviour()
|
||||
elif self.state['id'] == 3: # Vaccinated
|
||||
elif self.state["id"] == 3: # Vaccinated
|
||||
self.vaccinated_behaviour()
|
||||
|
||||
def neutral_behaviour(self):
|
||||
|
||||
# Infected
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
if self.prob(self.prob_neutral_making_denier):
|
||||
self.state['id'] = 3 # Vaccinated making denier
|
||||
self.state["id"] = 3 # Vaccinated making denier
|
||||
|
||||
def infected_behaviour(self):
|
||||
|
||||
# Neutral
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_infect):
|
||||
neighbor.state['id'] = 1 # Infected
|
||||
neighbor.state["id"] = 1 # Infected
|
||||
|
||||
def cured_behaviour(self):
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_cured_vaccinate_neutral):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if self.prob(self.prob_cured_healing_infected):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
def vaccinated_behaviour(self):
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if self.prob(self.prob_cured_healing_infected):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_cured_vaccinate_neutral):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
|
||||
# Generate anti-rumor
|
||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors_2 = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors_2:
|
||||
if self.prob(self.prob_generate_anti_rumor):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
|
||||
class ControlModelM2(BaseAgent):
|
||||
@@ -124,121 +137,134 @@ class ControlModelM2(BaseAgent):
|
||||
prob_generate_anti_rumor
|
||||
"""
|
||||
|
||||
|
||||
def __init__(self, model=None, unique_id=0, state=()):
|
||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
||||
|
||||
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_neutral_making_denier = np.random.normal(
|
||||
environment.environment_params["prob_neutral_making_denier"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_infect = np.random.normal(
|
||||
environment.environment_params["prob_infect"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_healing_infected = np.random.normal(
|
||||
environment.environment_params["prob_cured_healing_infected"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_cured_vaccinate_neutral = np.random.normal(
|
||||
environment.environment_params["prob_cured_vaccinate_neutral"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_healing_infected = np.random.normal(
|
||||
environment.environment_params["prob_vaccinated_healing_infected"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(
|
||||
environment.environment_params["prob_vaccinated_vaccinate_neutral"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
self.prob_generate_anti_rumor = np.random.normal(
|
||||
environment.environment_params["prob_generate_anti_rumor"],
|
||||
environment.environment_params["standard_variance"],
|
||||
)
|
||||
|
||||
def step(self):
|
||||
|
||||
if self.state['id'] == 0: # Neutral
|
||||
if self.state["id"] == 0: # Neutral
|
||||
self.neutral_behaviour()
|
||||
elif self.state['id'] == 1: # Infected
|
||||
elif self.state["id"] == 1: # Infected
|
||||
self.infected_behaviour()
|
||||
elif self.state['id'] == 2: # Cured
|
||||
elif self.state["id"] == 2: # Cured
|
||||
self.cured_behaviour()
|
||||
elif self.state['id'] == 3: # Vaccinated
|
||||
elif self.state["id"] == 3: # Vaccinated
|
||||
self.vaccinated_behaviour()
|
||||
elif self.state['id'] == 4: # Beacon-off
|
||||
elif self.state["id"] == 4: # Beacon-off
|
||||
self.beacon_off_behaviour()
|
||||
elif self.state['id'] == 5: # Beacon-on
|
||||
elif self.state["id"] == 5: # Beacon-on
|
||||
self.beacon_on_behaviour()
|
||||
|
||||
def neutral_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
self.state["visible"] = False
|
||||
|
||||
# Infected
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
if self.prob(self.prob_neutral_making_denier):
|
||||
self.state['id'] = 3 # Vaccinated making denier
|
||||
self.state["id"] = 3 # Vaccinated making denier
|
||||
|
||||
def infected_behaviour(self):
|
||||
|
||||
# Neutral
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_infect):
|
||||
neighbor.state['id'] = 1 # Infected
|
||||
self.state['visible'] = False
|
||||
neighbor.state["id"] = 1 # Infected
|
||||
self.state["visible"] = False
|
||||
|
||||
def cured_behaviour(self):
|
||||
|
||||
self.state['visible'] = True
|
||||
self.state["visible"] = True
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_cured_vaccinate_neutral):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if self.prob(self.prob_cured_healing_infected):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
def vaccinated_behaviour(self):
|
||||
self.state['visible'] = True
|
||||
self.state["visible"] = True
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if self.prob(self.prob_cured_healing_infected):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_cured_vaccinate_neutral):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
|
||||
# Generate anti-rumor
|
||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors_2 = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors_2:
|
||||
if self.prob(self.prob_generate_anti_rumor):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
def beacon_off_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
self.state["visible"] = False
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
self.state['id'] == 5 # Beacon on
|
||||
self.state["id"] = 5  # Beacon on
|
||||
|
||||
def beacon_on_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
self.state["visible"] = False
|
||||
# Cure (M2 feature added)
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
infected_neighbors = self.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if self.prob(self.prob_generate_anti_rumor):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors_infected:
|
||||
if self.prob(self.prob_generate_anti_rumor):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
|
||||
for neighbor in infected_neighbors_infected:
|
||||
if self.prob(self.prob_generate_anti_rumor):
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neighbor.state["id"] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
neutral_neighbors = self.get_neighbors(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if self.prob(self.prob_cured_vaccinate_neutral):
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
neighbor.state["id"] = 3 # Vaccinated
|
||||
|
||||
@@ -33,24 +33,32 @@ class SISaModel(FSM):
|
||||
|
||||
random = np.random.default_rng(seed=self._seed)
|
||||
|
||||
self.neutral_discontent_spon_prob = random.normal(self.env['neutral_discontent_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_discontent_infected_prob = random.normal(self.env['neutral_discontent_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_spon_prob = random.normal(self.env['neutral_content_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_infected_prob = random.normal(self.env['neutral_content_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_discontent_spon_prob = random.normal(
|
||||
self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
|
||||
)
|
||||
self.neutral_discontent_infected_prob = random.normal(
|
||||
self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
|
||||
)
|
||||
self.neutral_content_spon_prob = random.normal(
|
||||
self.env["neutral_content_spon_prob"], self.env["standard_variance"]
|
||||
)
|
||||
self.neutral_content_infected_prob = random.normal(
|
||||
self.env["neutral_content_infected_prob"], self.env["standard_variance"]
|
||||
)
|
||||
|
||||
self.discontent_neutral = random.normal(self.env['discontent_neutral'],
|
||||
self.env['standard_variance'])
|
||||
self.discontent_content = random.normal(self.env['discontent_content'],
|
||||
self.env['variance_d_c'])
|
||||
self.discontent_neutral = random.normal(
|
||||
self.env["discontent_neutral"], self.env["standard_variance"]
|
||||
)
|
||||
self.discontent_content = random.normal(
|
||||
self.env["discontent_content"], self.env["variance_d_c"]
|
||||
)
|
||||
|
||||
self.content_discontent = random.normal(self.env['content_discontent'],
|
||||
self.env['variance_c_d'])
|
||||
self.content_neutral = random.normal(self.env['content_neutral'],
|
||||
self.env['standard_variance'])
|
||||
self.content_discontent = random.normal(
|
||||
self.env["content_discontent"], self.env["variance_c_d"]
|
||||
)
|
||||
self.content_neutral = random.normal(
|
||||
self.env["content_neutral"], self.env["standard_variance"]
|
||||
)
|
||||
|
||||
@state
|
||||
def neutral(self):
|
||||
@@ -61,10 +69,10 @@ class SISaModel(FSM):
|
||||
return self.content
|
||||
|
||||
# Infected
|
||||
discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent)
|
||||
discontent_neighbors = self.count_neighbors(state_id=self.discontent)
|
||||
if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
|
||||
return self.discontent
|
||||
content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
|
||||
content_neighbors = self.count_neighbors(state_id=self.content.id)
|
||||
if self.prob(content_neighbors * self.neutral_content_infected_prob):
|
||||
return self.content
|
||||
return self.neutral
|
||||
@@ -76,7 +84,7 @@ class SISaModel(FSM):
|
||||
return self.neutral
|
||||
|
||||
# Superinfected
|
||||
content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
|
||||
content_neighbors = self.count_neighbors(state_id=self.content.id)
|
||||
if self.prob(content_neighbors * self.discontent_content):
|
||||
return self.content
|
||||
return self.discontent
|
||||
@@ -88,7 +96,7 @@ class SISaModel(FSM):
|
||||
return self.neutral
|
||||
|
||||
# Superinfected
|
||||
discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
|
||||
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
|
||||
if self.prob(discontent_neighbors * self.content_discontent):
|
||||
return self.discontent
|
||||
return self.content
|
||||
|
||||
@@ -17,15 +17,19 @@ class SentimentCorrelationModel(BaseAgent):
|
||||
|
||||
def __init__(self, environment, unique_id=0, state=()):
|
||||
super().__init__(model=environment, unique_id=unique_id, state=state)
|
||||
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
|
||||
self.anger_prob = environment.environment_params['anger_prob']
|
||||
self.joy_prob = environment.environment_params['joy_prob']
|
||||
self.sadness_prob = environment.environment_params['sadness_prob']
|
||||
self.disgust_prob = environment.environment_params['disgust_prob']
|
||||
self.state['time_awareness'] = []
|
||||
self.outside_effects_prob = environment.environment_params[
|
||||
"outside_effects_prob"
|
||||
]
|
||||
self.anger_prob = environment.environment_params["anger_prob"]
|
||||
self.joy_prob = environment.environment_params["joy_prob"]
|
||||
self.sadness_prob = environment.environment_params["sadness_prob"]
|
||||
self.disgust_prob = environment.environment_params["disgust_prob"]
|
||||
self.state["time_awareness"] = []
|
||||
for i in range(4): # In this model we have 4 sentiments
|
||||
self.state['time_awareness'].append(0) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
|
||||
self.state['sentimentCorrelation'] = 0
|
||||
self.state["time_awareness"].append(
|
||||
0
|
||||
) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
|
||||
self.state["sentimentCorrelation"] = 0
|
||||
|
||||
def step(self):
|
||||
self.behaviour()
|
||||
@@ -37,65 +41,75 @@ class SentimentCorrelationModel(BaseAgent):
|
||||
sad_neighbors_1_time_step = []
|
||||
disgusted_neighbors_1_time_step = []
|
||||
|
||||
angry_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
angry_neighbors = self.get_neighbors(state_id=1)
|
||||
for x in angry_neighbors:
|
||||
if x.state['time_awareness'][0] > (self.env.now-500):
|
||||
if x.state["time_awareness"][0] > (self.env.now - 500):
|
||||
angry_neighbors_1_time_step.append(x)
|
||||
num_neighbors_angry = len(angry_neighbors_1_time_step)
|
||||
|
||||
joyful_neighbors = self.get_neighboring_agents(state_id=2)
|
||||
joyful_neighbors = self.get_neighbors(state_id=2)
|
||||
for x in joyful_neighbors:
|
||||
if x.state['time_awareness'][1] > (self.env.now-500):
|
||||
if x.state["time_awareness"][1] > (self.env.now - 500):
|
||||
joyful_neighbors_1_time_step.append(x)
|
||||
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
|
||||
|
||||
sad_neighbors = self.get_neighboring_agents(state_id=3)
|
||||
sad_neighbors = self.get_neighbors(state_id=3)
|
||||
for x in sad_neighbors:
|
||||
if x.state['time_awareness'][2] > (self.env.now-500):
|
||||
if x.state["time_awareness"][2] > (self.env.now - 500):
|
||||
sad_neighbors_1_time_step.append(x)
|
||||
num_neighbors_sad = len(sad_neighbors_1_time_step)
|
||||
|
||||
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
|
||||
disgusted_neighbors = self.get_neighbors(state_id=4)
|
||||
for x in disgusted_neighbors:
|
||||
if x.state['time_awareness'][3] > (self.env.now-500):
|
||||
if x.state["time_awareness"][3] > (self.env.now - 500):
|
||||
disgusted_neighbors_1_time_step.append(x)
|
||||
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
|
||||
|
||||
anger_prob = self.anger_prob+(len(angry_neighbors_1_time_step)*self.anger_prob)
|
||||
joy_prob = self.joy_prob+(len(joyful_neighbors_1_time_step)*self.joy_prob)
|
||||
sadness_prob = self.sadness_prob+(len(sad_neighbors_1_time_step)*self.sadness_prob)
|
||||
disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
|
||||
anger_prob = self.anger_prob + (
|
||||
len(angry_neighbors_1_time_step) * self.anger_prob
|
||||
)
|
||||
joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
|
||||
sadness_prob = self.sadness_prob + (
|
||||
len(sad_neighbors_1_time_step) * self.sadness_prob
|
||||
)
|
||||
disgust_prob = self.disgust_prob + (
|
||||
len(disgusted_neighbors_1_time_step) * self.disgust_prob
|
||||
)
|
||||
outside_effects_prob = self.outside_effects_prob
|
||||
|
||||
num = self.random.random()
|
||||
|
||||
if num<outside_effects_prob:
|
||||
self.state['id'] = self.random.randint(1, 4)
|
||||
if num < outside_effects_prob:
|
||||
self.state["id"] = self.random.randint(1, 4)
|
||||
|
||||
self.state['sentimentCorrelation'] = self.state['id'] # It is stored when it has been infected for the dynamic network
|
||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
||||
self.state['sentiment'] = self.state['id']
|
||||
self.state["sentimentCorrelation"] = self.state[
|
||||
"id"
|
||||
] # It is stored when it has been infected for the dynamic network
|
||||
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
|
||||
self.state["sentiment"] = self.state["id"]
|
||||
|
||||
if num < anger_prob:
|
||||
|
||||
if(num<anger_prob):
|
||||
self.state["id"] = 1
|
||||
self.state["sentimentCorrelation"] = 1
|
||||
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
|
||||
elif num < joy_prob + anger_prob and num > anger_prob:
|
||||
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
||||
elif (num<joy_prob+anger_prob and num>anger_prob):
|
||||
self.state["id"] = 2
|
||||
self.state["sentimentCorrelation"] = 2
|
||||
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
|
||||
elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
|
||||
|
||||
self.state['id'] = 2
|
||||
self.state['sentimentCorrelation'] = 2
|
||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
||||
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
|
||||
self.state["id"] = 3
|
||||
self.state["sentimentCorrelation"] = 3
|
||||
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
|
||||
elif (
|
||||
num < disgust_prob + sadness_prob + anger_prob + joy_prob
|
||||
and num > sadness_prob + anger_prob + joy_prob
|
||||
):
|
||||
|
||||
self.state['id'] = 3
|
||||
self.state['sentimentCorrelation'] = 3
|
||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
||||
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
|
||||
self.state["id"] = 4
|
||||
self.state["sentimentCorrelation"] = 4
|
||||
self.state["time_awareness"][self.state["id"] - 1] = self.env.now
|
||||
|
||||
self.state['id'] = 4
|
||||
self.state['sentimentCorrelation'] = 4
|
||||
self.state['time_awareness'][self.state['id']-1] = self.env.now
|
||||
|
||||
self.state['sentiment'] = self.state['id']
|
||||
self.state["sentiment"] = self.state["id"]
|
||||
|
||||
@@ -20,17 +20,13 @@ from typing import Dict, List
|
||||
from .. import serialization, utils, time, config
|
||||
|
||||
|
||||
|
||||
def as_node(agent):
|
||||
if isinstance(agent, BaseAgent):
|
||||
return agent.id
|
||||
return agent
|
||||
|
||||
IGNORED_FIELDS = ('model', 'logger')
|
||||
|
||||
|
||||
class DeadAgent(Exception):
|
||||
pass
|
||||
IGNORED_FIELDS = ("model", "logger")
|
||||
|
||||
|
||||
class MetaAgent(ABCMeta):
|
||||
@@ -43,13 +39,44 @@ class MetaAgent(ABCMeta):
|
||||
defaults.update(i._defaults)
|
||||
|
||||
new_nmspc = {
|
||||
'_defaults': defaults,
|
||||
"_defaults": defaults,
|
||||
"_last_return": None,
|
||||
"_last_except": None,
|
||||
}
|
||||
|
||||
for attr, func in namespace.items():
|
||||
if isinstance(func, types.FunctionType) or isinstance(func, property) or attr[0] == '_':
|
||||
if attr == "step" and inspect.isgeneratorfunction(func):
|
||||
orig_func = func
|
||||
new_nmspc["_coroutine"] = None
|
||||
|
||||
@wraps(func)
|
||||
def func(self):
|
||||
while True:
|
||||
if not self._coroutine:
|
||||
self._coroutine = orig_func(self)
|
||||
try:
|
||||
if self._last_except:
|
||||
return self._coroutine.throw(self._last_except)
|
||||
else:
|
||||
return self._coroutine.send(self._last_return)
|
||||
except StopIteration as ex:
|
||||
self._coroutine = None
|
||||
return ex.value
|
||||
finally:
|
||||
self._last_return = None
|
||||
self._last_except = None
|
||||
|
||||
func.id = name or func.__name__
|
||||
func.is_default = False
|
||||
new_nmspc[attr] = func
|
||||
elif attr == 'defaults':
|
||||
elif (
|
||||
isinstance(func, types.FunctionType)
|
||||
or isinstance(func, property)
|
||||
or isinstance(func, classmethod)
|
||||
or attr[0] == "_"
|
||||
):
|
||||
new_nmspc[attr] = func
|
||||
elif attr == "defaults":
|
||||
defaults.update(func)
|
||||
else:
|
||||
defaults[attr] = copy(func)
|
||||
@@ -69,12 +96,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
unique_id,
|
||||
model,
|
||||
name=None,
|
||||
interval=None,
|
||||
**kwargs):
|
||||
def __init__(self, unique_id, model, name=None, interval=None, **kwargs):
|
||||
# Check for REQUIRED arguments
|
||||
# Initialize agent parameters
|
||||
if isinstance(unique_id, MesaAgent):
|
||||
@@ -82,16 +104,19 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
assert isinstance(unique_id, int)
|
||||
super().__init__(unique_id=unique_id, model=model)
|
||||
|
||||
self.name = str(name) if name else'{}[{}]'.format(type(self).__name__, self.unique_id)
|
||||
|
||||
self.name = (
|
||||
str(name) if name else "{}[{}]".format(type(self).__name__, self.unique_id)
|
||||
)
|
||||
|
||||
self.alive = True
|
||||
|
||||
self.interval = interval or self.get('interval', 1)
|
||||
logger = utils.logger.getChild(getattr(self.model, 'id', self.model)).getChild(self.name)
|
||||
self.logger = logging.LoggerAdapter(logger, {'agent_name': self.name})
|
||||
self.interval = interval or self.get("interval", 1)
|
||||
logger = utils.logger.getChild(getattr(self.model, "id", self.model)).getChild(
|
||||
self.name
|
||||
)
|
||||
self.logger = logging.LoggerAdapter(logger, {"agent_name": self.name})
|
||||
|
||||
if hasattr(self, 'level'):
|
||||
if hasattr(self, "level"):
|
||||
self.logger.setLevel(self.level)
|
||||
|
||||
for (k, v) in self._defaults.items():
|
||||
@@ -113,27 +138,26 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
def id(self):
|
||||
return self.unique_id
|
||||
|
||||
@property
|
||||
def state(self):
|
||||
'''
|
||||
Return the agent itself, which behaves as a dictionary.
|
||||
|
||||
This method shouldn't be used, but is kept here for backwards compatibility.
|
||||
'''
|
||||
return self
|
||||
|
||||
@state.setter
|
||||
def state(self, value):
|
||||
if not value:
|
||||
return
|
||||
for k, v in value.items():
|
||||
self[k] = v
|
||||
@classmethod
|
||||
def from_dict(cls, model, attrs, warn_extra=True):
|
||||
ignored = {}
|
||||
args = {}
|
||||
for k, v in attrs.items():
|
||||
if k in inspect.signature(cls).parameters:
|
||||
args[k] = v
|
||||
else:
|
||||
ignored[k] = v
|
||||
if ignored and warn_extra:
|
||||
utils.logger.info(
|
||||
f"Ignoring the following arguments for agent class { agent_class.__name__ }: { ignored }"
|
||||
)
|
||||
return cls(model=model, **args)
|
||||
|
||||
def __getitem__(self, key):
|
||||
try:
|
||||
return getattr(self, key)
|
||||
except AttributeError:
|
||||
raise KeyError(f'key {key} not found in agent')
|
||||
raise KeyError(f"key {key} not found in agent")
|
||||
|
||||
def __delitem__(self, key):
|
||||
return delattr(self, key)
|
||||
@@ -151,7 +175,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
return self.items()
|
||||
|
||||
def keys(self):
|
||||
return (k for k in self.__dict__ if k[0] != '_' and k not in IGNORED_FIELDS)
|
||||
return (k for k in self.__dict__ if k[0] != "_" and k not in IGNORED_FIELDS)
|
||||
|
||||
def items(self, keys=None, skip=None):
|
||||
keys = keys if keys is not None else self.keys()
|
||||
@@ -172,13 +196,17 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
return None
|
||||
|
||||
def die(self):
|
||||
self.info(f'agent dying')
|
||||
self.info(f"agent dying")
|
||||
self.alive = False
|
||||
try:
|
||||
self.model.schedule.remove(self)
|
||||
except KeyError:
|
||||
pass
|
||||
return time.NEVER
|
||||
|
||||
def step(self):
|
||||
if not self.alive:
|
||||
raise DeadAgent(self.unique_id)
|
||||
raise time.DeadAgent(self.unique_id)
|
||||
return super().step() or time.Delta(self.interval)
|
||||
|
||||
def log(self, message, *args, level=logging.INFO, **kwargs):
|
||||
@@ -189,9 +217,9 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
for k, v in kwargs.items():
|
||||
message += " {k}={v} ".format(k, v)
|
||||
extra = {}
|
||||
extra['now'] = self.now
|
||||
extra['unique_id'] = self.unique_id
|
||||
extra['agent_name'] = self.name
|
||||
extra["now"] = self.now
|
||||
extra["unique_id"] = self.unique_id
|
||||
extra["agent_name"] = self.name
|
||||
return self.logger.log(level, message, extra=extra)
|
||||
|
||||
def debug(self, *args, **kwargs):
|
||||
@@ -217,198 +245,18 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
|
||||
content = dict(self.items(keys=keys))
|
||||
if pretty and content:
|
||||
d = content
|
||||
content = '\n'
|
||||
content = "\n"
|
||||
for k, v in d.items():
|
||||
content += f'- {k}: {v}\n'
|
||||
content = textwrap.indent(content, ' ')
|
||||
content += f"- {k}: {v}\n"
|
||||
content = textwrap.indent(content, " ")
|
||||
return f"{repr(self)}{content}"
|
||||
|
||||
def __repr__(self):
|
||||
return f"{self.__class__.__name__}({self.unique_id})"
|
||||
|
||||
|
||||
class NetworkAgent(BaseAgent):
|
||||
|
||||
def __init__(self, *args, topology, node_id, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
self.topology = topology
|
||||
self.node_id = node_id
|
||||
self.G = self.model.topologies[topology]
|
||||
assert self.G
|
||||
|
||||
def count_neighboring_agents(self, state_id=None, **kwargs):
|
||||
return len(self.get_neighboring_agents(state_id=state_id, **kwargs))
|
||||
|
||||
def get_neighboring_agents(self, state_id=None, **kwargs):
|
||||
return self.get_agents(limit_neighbors=True, state_id=state_id, **kwargs)
|
||||
|
||||
def iter_agents(self, unique_id=None, limit_neighbors=False, **kwargs):
|
||||
unique_ids = None
|
||||
if isinstance(unique_id, list):
|
||||
unique_ids = set(unique_id)
|
||||
elif unique_id is not None:
|
||||
unique_ids = set([unique_id,])
|
||||
|
||||
if limit_neighbors:
|
||||
neighbor_ids = set()
|
||||
for node_id in self.G.neighbors(self.node_id):
|
||||
if self.G.nodes[node_id].get('agent_id') is not None:
|
||||
neighbor_ids.add(node_id)
|
||||
if unique_ids:
|
||||
unique_ids = unique_ids & neighbor_ids
|
||||
else:
|
||||
unique_ids = neighbor_ids
|
||||
if not unique_ids:
|
||||
return
|
||||
unique_ids = list(unique_ids)
|
||||
yield from super().iter_agents(unique_id=unique_ids, **kwargs)
|
||||
|
||||
def subgraph(self, center=True, **kwargs):
|
||||
include = [self] if center else []
|
||||
G = self.G.subgraph(n.node_id for n in list(self.get_agents(**kwargs)+include))
|
||||
return G
|
||||
|
||||
def remove_node(self):
|
||||
self.G.remove_node(self.node_id)
|
||||
|
||||
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
|
||||
if self.node_id not in self.G.nodes(data=False):
|
||||
raise ValueError('{} not in list of existing agents in the network'.format(self.unique_id))
|
||||
if other.node_id not in self.G.nodes(data=False):
|
||||
raise ValueError('{} not in list of existing agents in the network'.format(other))
|
||||
|
||||
self.G.add_edge(self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs)
|
||||
|
||||
def die(self, remove=True):
|
||||
if remove:
|
||||
self.remove_node()
|
||||
return super().die()
|
||||
|
||||
|
||||
def state(name=None):
|
||||
def decorator(func, name=None):
|
||||
'''
|
||||
A state function should return either a state id, or a tuple (state_id, when)
|
||||
The default value for state_id is the current state id.
|
||||
The default value for when is the interval defined in the environment.
|
||||
'''
|
||||
if inspect.isgeneratorfunction(func):
|
||||
orig_func = func
|
||||
|
||||
@wraps(func)
|
||||
def func(self):
|
||||
while True:
|
||||
if not self._coroutine:
|
||||
self._coroutine = orig_func(self)
|
||||
try:
|
||||
n = next(self._coroutine)
|
||||
if n:
|
||||
return None, n
|
||||
return
|
||||
except StopIteration as ex:
|
||||
self._coroutine = None
|
||||
next_state = ex.value
|
||||
if next_state is not None:
|
||||
self.set_state(next_state)
|
||||
return next_state
|
||||
|
||||
func.id = name or func.__name__
|
||||
func.is_default = False
|
||||
return func
|
||||
|
||||
if callable(name):
|
||||
return decorator(name)
|
||||
else:
|
||||
return partial(decorator, name=name)
|
||||
|
||||
|
||||
def default_state(func):
|
||||
func.is_default = True
|
||||
return func
|
||||
|
||||
|
||||
class MetaFSM(MetaAgent):
|
||||
def __new__(mcls, name, bases, namespace):
|
||||
states = {}
|
||||
# Re-use states from inherited classes
|
||||
default_state = None
|
||||
for i in bases:
|
||||
if isinstance(i, MetaFSM):
|
||||
for state_id, state in i._states.items():
|
||||
if state.is_default:
|
||||
default_state = state
|
||||
states[state_id] = state
|
||||
|
||||
# Add new states
|
||||
for attr, func in namespace.items():
|
||||
if hasattr(func, 'id'):
|
||||
if func.is_default:
|
||||
default_state = func
|
||||
states[func.id] = func
|
||||
|
||||
namespace.update({
|
||||
'_default_state': default_state,
|
||||
'_states': states,
|
||||
})
|
||||
|
||||
return super(MetaFSM, mcls).__new__(mcls=mcls, name=name, bases=bases, namespace=namespace)
|
||||
|
||||
|
||||
class FSM(BaseAgent, metaclass=MetaFSM):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(FSM, self).__init__(*args, **kwargs)
|
||||
if not hasattr(self, 'state_id'):
|
||||
if not self._default_state:
|
||||
raise ValueError('No default state specified for {}'.format(self.unique_id))
|
||||
self.state_id = self._default_state.id
|
||||
|
||||
self._coroutine = None
|
||||
self.set_state(self.state_id)
|
||||
|
||||
def step(self):
|
||||
self.debug(f'Agent {self.unique_id} @ state {self.state_id}')
|
||||
default_interval = super().step()
|
||||
|
||||
next_state = self._states[self.state_id](self)
|
||||
|
||||
when = None
|
||||
try:
|
||||
next_state, *when = next_state
|
||||
if not when:
|
||||
when = None
|
||||
elif len(when) == 1:
|
||||
when = when[0]
|
||||
else:
|
||||
raise ValueError('Too many values returned. Only state (and time) allowed')
|
||||
except TypeError:
|
||||
pass
|
||||
|
||||
if next_state is not None:
|
||||
self.set_state(next_state)
|
||||
|
||||
return when or default_interval
|
||||
|
||||
def set_state(self, state, when=None):
|
||||
if hasattr(state, 'id'):
|
||||
state = state.id
|
||||
if state not in self._states:
|
||||
raise ValueError('{} is not a valid state'.format(state))
|
||||
self.state_id = state
|
||||
if when is not None:
|
||||
self.model.schedule.add(self, when=when)
|
||||
return state
|
||||
|
||||
def die(self):
|
||||
return self.dead, super().die()
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
return self.die()
|
||||
|
||||
|
||||
def prob(prob, random):
|
||||
'''
|
||||
"""
|
||||
A True/False uniform distribution with a given probability.
|
||||
To be used like this:
|
||||
|
||||
@@ -417,14 +265,13 @@ def prob(prob, random):
|
||||
if prob(0.3):
|
||||
do_something()
|
||||
|
||||
'''
|
||||
"""
|
||||
r = random.random()
|
||||
return r < prob
|
||||
|
||||
|
||||
def calculate_distribution(network_agents=None,
|
||||
agent_class=None):
|
||||
'''
|
||||
def calculate_distribution(network_agents=None, agent_class=None):
|
||||
"""
|
||||
Calculate the threshold values (thresholds for a uniform distribution)
|
||||
of an agent distribution given the weights of each agent type.
|
||||
|
||||
@@ -447,168 +294,54 @@ def calculate_distribution(network_agents=None,
|
||||
|
||||
In this example, 20% of the nodes will be marked as type
|
||||
'agent_class_1'.
|
||||
'''
|
||||
"""
|
||||
if network_agents:
|
||||
network_agents = [deepcopy(agent) for agent in network_agents if not hasattr(agent, 'id')]
|
||||
network_agents = [
|
||||
deepcopy(agent) for agent in network_agents if not hasattr(agent, "id")
|
||||
]
|
||||
elif agent_class:
|
||||
network_agents = [{'agent_class': agent_class}]
|
||||
network_agents = [{"agent_class": agent_class}]
|
||||
else:
|
||||
raise ValueError('Specify a distribution or a default agent type')
|
||||
raise ValueError("Specify a distribution or a default agent type")
|
||||
|
||||
# Fix missing weights and incompatible types
|
||||
for x in network_agents:
|
||||
x['weight'] = float(x.get('weight', 1))
|
||||
x["weight"] = float(x.get("weight", 1))
|
||||
|
||||
# Calculate the thresholds
|
||||
total = sum(x['weight'] for x in network_agents)
|
||||
total = sum(x["weight"] for x in network_agents)
|
||||
acc = 0
|
||||
for v in network_agents:
|
||||
if 'ids' in v:
|
||||
if "ids" in v:
|
||||
continue
|
||||
upper = acc + (v['weight']/total)
|
||||
v['threshold'] = [acc, upper]
|
||||
upper = acc + (v["weight"] / total)
|
||||
v["threshold"] = [acc, upper]
|
||||
acc = upper
|
||||
return network_agents
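
As a quick illustration of the threshold calculation above (a sketch; the class names and weights are made up), two entries with weights 2 and 8 end up covering 20% and 80% of the [0, 1) range:

# Hypothetical distribution: two agent types with weights 2 and 8
network_agents = [
    {"agent_class": "agent_class_1", "weight": 2},
    {"agent_class": "agent_class_2", "weight": 8},
]
distro = calculate_distribution(network_agents)
# agent_class_1 -> threshold [0, 0.2]   (20% of the nodes)
# agent_class_2 -> threshold [0.2, 1.0] (80% of the nodes)
assert distro[0]["threshold"] == [0, 0.2]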
|
||||
|
||||
|
||||
def serialize_type(agent_class, known_modules=[], **kwargs):
|
||||
def _serialize_type(agent_class, known_modules=[], **kwargs):
|
||||
if isinstance(agent_class, str):
|
||||
return agent_class
|
||||
known_modules += ['soil.agents']
|
||||
return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[1] # Get the name of the class
|
||||
known_modules += ["soil.agents"]
|
||||
return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[
|
||||
1
|
||||
] # Get the name of the class
|
||||
|
||||
|
||||
def serialize_definition(network_agents, known_modules=[]):
|
||||
'''
|
||||
When serializing an agent distribution, remove the thresholds, in order
|
||||
to avoid cluttering the YAML definition file.
|
||||
'''
|
||||
d = deepcopy(list(network_agents))
|
||||
for v in d:
|
||||
if 'threshold' in v:
|
||||
del v['threshold']
|
||||
v['agent_class'] = serialize_type(v['agent_class'],
|
||||
known_modules=known_modules)
|
||||
return d
|
||||
|
||||
|
||||
def deserialize_type(agent_class, known_modules=[]):
|
||||
def _deserialize_type(agent_class, known_modules=[]):
|
||||
if not isinstance(agent_class, str):
|
||||
return agent_class
|
||||
known = known_modules + ['soil.agents', 'soil.agents.custom' ]
|
||||
known = known_modules + ["soil.agents", "soil.agents.custom"]
|
||||
agent_class = serialization.deserializer(agent_class, known_modules=known)
|
||||
return agent_class
|
||||
|
||||
|
||||
def deserialize_definition(ind, **kwargs):
|
||||
d = deepcopy(ind)
|
||||
for v in d:
|
||||
v['agent_class'] = deserialize_type(v['agent_class'], **kwargs)
|
||||
return d
|
||||
|
||||
|
||||
def _validate_states(states, topology):
|
||||
'''Validate states to avoid ignoring states during initialization'''
|
||||
states = states or []
|
||||
if isinstance(states, dict):
|
||||
for x in states:
|
||||
assert x in topology.nodes
|
||||
else:
|
||||
assert len(states) <= len(topology)
|
||||
return states
|
||||
|
||||
|
||||
def _convert_agent_classs(ind, to_string=False, **kwargs):
|
||||
'''Convenience method to allow specifying agents by class or class name.'''
|
||||
if to_string:
|
||||
return serialize_definition(ind, **kwargs)
|
||||
return deserialize_definition(ind, **kwargs)
|
||||
|
||||
|
||||
# def _agent_from_definition(definition, random, value=-1, unique_id=None):
|
||||
# """Used in the initialization of agents given an agent distribution."""
|
||||
# if value < 0:
|
||||
# value = random.random()
|
||||
# for d in sorted(definition, key=lambda x: x.get('threshold')):
|
||||
# threshold = d.get('threshold', (-1, -1))
|
||||
# # Check if the definition matches by id (first) or by threshold
|
||||
# if (unique_id is not None and unique_id in d.get('ids', [])) or \
|
||||
# (value >= threshold[0] and value < threshold[1]):
|
||||
# state = {}
|
||||
# if 'state' in d:
|
||||
# state = deepcopy(d['state'])
|
||||
# return d['agent_class'], state
|
||||
|
||||
# raise Exception('Definition for value {} not found in: {}'.format(value, definition))
|
||||
|
||||
|
||||
# def _definition_to_dict(definition, random, size=None, default_state=None):
|
||||
# state = default_state or {}
|
||||
# agents = {}
|
||||
# remaining = {}
|
||||
# if size:
|
||||
# for ix in range(size):
|
||||
# remaining[ix] = copy(state)
|
||||
# else:
|
||||
# remaining = defaultdict(lambda x: copy(state))
|
||||
|
||||
# distro = sorted([item for item in definition if 'weight' in item])
|
||||
|
||||
# id = 0
|
||||
|
||||
# def init_agent(item, id=ix):
|
||||
# while id in agents:
|
||||
# id += 1
|
||||
|
||||
# agent = remaining[id]
|
||||
# agent['state'].update(copy(item.get('state', {})))
|
||||
# agents[agent.unique_id] = agent
|
||||
# del remaining[id]
|
||||
# return agent
|
||||
|
||||
# for item in definition:
|
||||
# if 'ids' in item:
|
||||
# ids = item['ids']
|
||||
# del item['ids']
|
||||
# for id in ids:
|
||||
# agent = init_agent(item, id)
|
||||
|
||||
# for item in definition:
|
||||
# if 'number' in item:
|
||||
# times = item['number']
|
||||
# del item['number']
|
||||
# for times in range(times):
|
||||
# if size:
|
||||
# ix = random.choice(remaining.keys())
|
||||
# agent = init_agent(item, id)
|
||||
# else:
|
||||
# agent = init_agent(item)
|
||||
# if not size:
|
||||
# return agents
|
||||
|
||||
# if len(remaining) < 0:
|
||||
# raise Exception('Invalid definition. Too many agents to add')
|
||||
|
||||
|
||||
# total_weight = float(sum(s['weight'] for s in distro))
|
||||
# unit = size / total_weight
|
||||
|
||||
# for item in distro:
|
||||
# times = unit * item['weight']
|
||||
# del item['weight']
|
||||
# for times in range(times):
|
||||
# ix = random.choice(remaining.keys())
|
||||
# agent = init_agent(item, id)
|
||||
# return agents
|
||||
|
||||
|
||||
class AgentView(Mapping, Set):
|
||||
"""A lazy-loaded list of agents.
|
||||
"""
|
||||
"""A lazy-loaded list of agents."""
|
||||
|
||||
__slots__ = ("_agents",)
|
||||
|
||||
|
||||
def __init__(self, agents):
|
||||
self._agents = agents
|
||||
|
||||
@@ -651,11 +384,20 @@ class AgentView(Mapping, Set):
|
||||
return f"{self.__class__.__name__}({self})"
|
||||
|
||||
|
||||
def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=None, ignore=None, state=None,
|
||||
limit=None, **kwargs):
|
||||
'''
|
||||
def filter_agents(
|
||||
agents,
|
||||
*id_args,
|
||||
unique_id=None,
|
||||
state_id=None,
|
||||
agent_class=None,
|
||||
ignore=None,
|
||||
state=None,
|
||||
limit=None,
|
||||
**kwargs,
|
||||
):
|
||||
"""
|
||||
Filter agents given as a dict, by the criteria given as arguments (e.g., certain type or state id).
|
||||
'''
|
||||
"""
|
||||
assert isinstance(agents, dict)
|
||||
|
||||
ids = []
|
||||
@@ -678,7 +420,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
|
||||
state_id = tuple([state_id])
|
||||
|
||||
if agent_class is not None:
|
||||
agent_class = deserialize_type(agent_class)
|
||||
agent_class = _deserialize_type(agent_class)
|
||||
try:
|
||||
agent_class = tuple(agent_class)
|
||||
except TypeError:
|
||||
@@ -688,7 +430,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
|
||||
f = filter(lambda x: x not in ignore, f)
|
||||
|
||||
if state_id is not None:
|
||||
f = filter(lambda agent: agent.get('state_id', None) in state_id, f)
|
||||
f = filter(lambda agent: agent.get("state_id", None) in state_id, f)
|
||||
|
||||
if agent_class is not None:
|
||||
f = filter(lambda agent: isinstance(agent, agent_class), f)
|
||||
@@ -697,7 +439,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
|
||||
state.update(kwargs)
|
||||
|
||||
for k, v in state.items():
|
||||
f = filter(lambda agent: agent.state.get(k, None) == v, f)
|
||||
f = filter(lambda agent, k=k, v=v: getattr(agent, k, None) == v, f)
|
||||
|
||||
if limit is not None:
|
||||
f = islice(f, limit)
|
||||
@@ -705,123 +447,135 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
|
||||
yield from f
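
A minimal usage sketch (assuming `env.agents` here is the dict of unique_id -> agent that the assert above expects; extra keyword arguments are matched against agent attributes):

# Keep at most 5 alive FSM agents whose state_id is "neutral"
matching = list(
    filter_agents(
        env.agents,            # hypothetical dict of {unique_id: agent}
        state_id="neutral",
        agent_class=FSM,
        limit=5,
        alive=True,            # arbitrary attribute filter, handled via getattr
    )
)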
|
||||
|
||||
|
||||
def from_config(cfg: config.AgentConfig, random, topologies: Dict[str, nx.Graph] = None) -> List[Dict[str, Any]]:
|
||||
'''
|
||||
def from_config(
|
||||
cfg: config.AgentConfig, random, topology: nx.Graph = None
|
||||
) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
This function turns an agentconfig into a list of individual "agent specifications", which are just a dictionary
|
||||
with the parameters that the environment will use to construct each agent.
|
||||
|
||||
This function does NOT return a list of agents, mostly because some attributes to the agent are not known at the
|
||||
time of calling this function, such as `unique_id`.
|
||||
'''
|
||||
"""
|
||||
default = cfg or config.AgentConfig()
|
||||
if not isinstance(cfg, config.AgentConfig):
|
||||
cfg = config.AgentConfig(**cfg)
|
||||
return _agents_from_config(cfg, topologies=topologies, random=random)
|
||||
|
||||
|
||||
def _agents_from_config(cfg: config.AgentConfig,
|
||||
topologies: Dict[str, nx.Graph],
|
||||
random) -> List[Dict[str, Any]]:
|
||||
if cfg and not isinstance(cfg, config.AgentConfig):
|
||||
cfg = config.AgentConfig(**cfg)
|
||||
|
||||
agents = []
|
||||
|
||||
assigned = defaultdict(int)
|
||||
assigned_total = 0
|
||||
assigned_network = 0
|
||||
|
||||
if cfg.fixed is not None:
|
||||
agents, counts = _from_fixed(cfg.fixed, topology=cfg.topology, default=cfg)
|
||||
assigned.update(counts)
|
||||
agents, assigned_total, assigned_network = _from_fixed(
|
||||
cfg.fixed, topology=cfg.topology, default=cfg
|
||||
)
|
||||
|
||||
n = cfg.n
|
||||
|
||||
if cfg.distribution:
|
||||
topo_size = {top: len(topologies[top]) for top in topologies}
|
||||
topo_size = len(topology) if topology else 0
|
||||
|
||||
grouped = defaultdict(list)
|
||||
networked = []
|
||||
total = []
|
||||
|
||||
for d in cfg.distribution:
|
||||
if d.strategy == config.Strategy.topology:
|
||||
topology = d.topology if ('topology' in d.__fields_set__) else cfg.topology
|
||||
if not topology:
|
||||
raise ValueError('The "topology" strategy only works if the topology parameter is specified')
|
||||
if topology not in topo_size:
|
||||
raise ValueError(f'Unknown topology selected: { topology }. Make sure the topology has been defined')
|
||||
topo = d.topology if ("topology" in d.__fields_set__) else cfg.topology
|
||||
if not topo:
|
||||
raise ValueError(
|
||||
'The "topology" strategy only works if the topology parameter is set to True'
|
||||
)
|
||||
if not topo_size:
|
||||
raise ValueError(
|
||||
f"Topology does not have enough free nodes to assign one to the agent"
|
||||
)
|
||||
|
||||
grouped[topology].append(d)
|
||||
networked.append(d)
|
||||
|
||||
if d.strategy == config.Strategy.total:
|
||||
if not cfg.n:
|
||||
raise ValueError('Cannot use the "total" strategy without providing the total number of agents')
|
||||
raise ValueError(
|
||||
'Cannot use the "total" strategy without providing the total number of agents'
|
||||
)
|
||||
total.append(d)
|
||||
|
||||
|
||||
for (topo, distro) in grouped.items():
|
||||
if not topologies or topo not in topo_size:
|
||||
raise ValueError(
|
||||
'You need to specify a target number of agents for the distribution \
|
||||
or a configuration with a topology, along with a dictionary with \
|
||||
all the available topologies')
|
||||
n = len(topologies[topo])
|
||||
target = topo_size[topo] - assigned[topo]
|
||||
new_agents = _from_distro(cfg.distribution, target,
|
||||
topology=topo,
|
||||
default=cfg,
|
||||
random=random)
|
||||
assigned[topo] += len(new_agents)
|
||||
if networked:
|
||||
new_agents = _from_distro(
|
||||
networked,
|
||||
n=topo_size - assigned_network,
|
||||
topology=topo,
|
||||
default=cfg,
|
||||
random=random,
|
||||
)
|
||||
assigned_total += len(new_agents)
|
||||
assigned_network += len(new_agents)
|
||||
agents += new_agents
|
||||
|
||||
if total:
|
||||
remaining = n - sum(assigned.values())
|
||||
agents += _from_distro(total, remaining,
|
||||
topology='', # DO NOT assign to any topology
|
||||
default=cfg,
|
||||
random=random)
|
||||
remaining = n - assigned_total
|
||||
agents += _from_distro(total, n=remaining, default=cfg, random=random)
|
||||
|
||||
|
||||
if sum(assigned.values()) != sum(topo_size.values()):
|
||||
utils.logger.warn(f'The total number of agents does not match the total number of nodes in '
|
||||
'every topology. This may be due to a definition error: assigned: '
|
||||
f'{ assigned } total sizes: { topo_size }')
|
||||
if assigned_network < topo_size:
|
||||
utils.logger.warn(
|
||||
f"The total number of agents does not match the total number of nodes in "
|
||||
"every topology. This may be due to a definition error: assigned: "
|
||||
f"{ assigned } total size: { topo_size }"
|
||||
)
|
||||
|
||||
return agents
|
||||
|
||||
|
||||
def _from_fixed(lst: List[config.FixedAgentConfig], topology: str, default: config.SingleAgentConfig) -> List[Dict[str, Any]]:
|
||||
def _from_fixed(
|
||||
lst: List[config.FixedAgentConfig],
|
||||
topology: bool,
|
||||
default: config.SingleAgentConfig,
|
||||
) -> List[Dict[str, Any]]:
|
||||
agents = []
|
||||
|
||||
counts = {}
|
||||
counts_total = 0
|
||||
counts_network = 0
|
||||
|
||||
for fixed in lst:
|
||||
agent = {}
|
||||
if default:
|
||||
agent = default.state.copy()
|
||||
agent.update(fixed.state)
|
||||
cls = serialization.deserialize(fixed.agent_class or (default and default.agent_class))
|
||||
agent['agent_class'] = cls
|
||||
topo = fixed.topology if ('topology' in fixed.__fields_set__) else topology or default.topology
|
||||
cls = serialization.deserialize(
|
||||
fixed.agent_class or (default and default.agent_class)
|
||||
)
|
||||
agent["agent_class"] = cls
|
||||
topo = (
|
||||
fixed.topology
|
||||
if ("topology" in fixed.__fields_set__)
|
||||
else topology or default.topology
|
||||
)
|
||||
|
||||
if topo:
|
||||
agent['topology'] = topo
|
||||
agent["topology"] = True
|
||||
counts_network += 1
|
||||
if not fixed.hidden:
|
||||
counts[topo] = counts.get(topo, 0) + 1
|
||||
counts_total += 1
|
||||
agents.append(agent)
|
||||
|
||||
return agents, counts
|
||||
return agents, counts_total, counts_network
|
||||
|
||||
|
||||
def _from_distro(distro: List[config.AgentDistro],
|
||||
n: int,
|
||||
topology: str,
|
||||
default: config.SingleAgentConfig,
|
||||
random) -> List[Dict[str, Any]]:
|
||||
def _from_distro(
|
||||
distro: List[config.AgentDistro],
|
||||
n: int,
|
||||
topology: str,
|
||||
default: config.SingleAgentConfig,
|
||||
random,
|
||||
) -> List[Dict[str, Any]]:
|
||||
|
||||
agents = []
|
||||
|
||||
if n is None:
|
||||
if any(dist.n is None for dist in distro):
|
||||
raise ValueError('You must provide a total number of agents, or the number of each type')
|
||||
raise ValueError(
|
||||
"You must provide a total number of agents, or the number of each type"
|
||||
)
|
||||
n = sum(dist.n for dist in distro)
|
||||
|
||||
weights = list(dist.weight if dist.weight is not None else 1 for dist in distro)
|
||||
@@ -834,35 +588,48 @@ def _from_distro(distro: List[config.AgentDistro],
|
||||
# So instead we calculate our own distribution to make sure the actual ratios are close to what we would expect
|
||||
|
||||
# Calculate how many times each has to appear
|
||||
indices = list(chain.from_iterable([idx] * int(n*chunk) for (idx, n) in enumerate(norm)))
|
||||
indices = list(
|
||||
chain.from_iterable([idx] * int(n * chunk) for (idx, n) in enumerate(norm))
|
||||
)
|
||||
|
||||
# Complete with random agents following the original weight distribution
|
||||
if len(indices) < n:
|
||||
indices += random.choices(list(range(len(distro))), weights=[d.weight for d in distro], k=n-len(indices))
|
||||
indices += random.choices(
|
||||
list(range(len(distro))),
|
||||
weights=[d.weight for d in distro],
|
||||
k=n - len(indices),
|
||||
)
|
||||
|
||||
# Deserialize classes for efficiency
|
||||
classes = list(serialization.deserialize(i.agent_class or default.agent_class) for i in distro)
|
||||
classes = list(
|
||||
serialization.deserialize(i.agent_class or default.agent_class) for i in distro
|
||||
)
|
||||
|
||||
# Add them in random order
|
||||
random.shuffle(indices)
|
||||
|
||||
|
||||
for idx in indices:
|
||||
d = distro[idx]
|
||||
agent = d.state.copy()
|
||||
cls = classes[idx]
|
||||
agent['agent_class'] = cls
|
||||
agent["agent_class"] = cls
|
||||
if default:
|
||||
agent.update(default.state)
|
||||
# agent = cls(unique_id=agent_id, model=env, **state)
|
||||
topology = d.topology if ('topology' in d.__fields_set__) else topology or default.topology
|
||||
topology = (
|
||||
d.topology
|
||||
if ("topology" in d.__fields_set__)
|
||||
else topology or default.topology
|
||||
)
|
||||
if topology:
|
||||
agent['topology'] = topology
|
||||
agent["topology"] = topology
|
||||
agents.append(agent)
|
||||
|
||||
return agents
|
||||
|
||||
|
||||
from .network_agents import *
|
||||
from .fsm import *
|
||||
from .evented import *
|
||||
from .BassModel import *
|
||||
from .BigMarketModel import *
|
||||
from .IndependentCascadeModel import *
|
||||
@@ -876,4 +643,5 @@ try:
|
||||
from .Geo import Geo
|
||||
except ImportError:
|
||||
import sys
|
||||
print('Could not load the Geo Agent, scipy is not installed', file=sys.stderr)
|
||||
|
||||
print("Could not load the Geo Agent, scipy is not installed", file=sys.stderr)
|
||||
|
||||
57
soil/agents/evented.py
Normal file
@@ -0,0 +1,57 @@
|
||||
from . import BaseAgent
|
||||
from ..events import Message, Tell, Ask, Reply, TimedOut
|
||||
from ..time import Cond
|
||||
from functools import partial
|
||||
from collections import deque
|
||||
|
||||
|
||||
class Evented(BaseAgent):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self._inbox = deque()
|
||||
self._received = 0
|
||||
self._processed = 0
|
||||
|
||||
|
||||
def on_receive(self, *args, **kwargs):
|
||||
pass
|
||||
|
||||
def received(self, expiration=None, timeout=None):
|
||||
current = self._received
|
||||
if expiration is None:
|
||||
expiration = float('inf') if timeout is None else self.now + timeout
|
||||
|
||||
if expiration < self.now:
|
||||
raise ValueError("Invalid expiration time")
|
||||
|
||||
def ready(agent):
|
||||
return agent._received > current or agent.now >= expiration
|
||||
|
||||
def value(agent):
|
||||
if agent.now > expiration:
|
||||
raise TimedOut("No message received")
|
||||
|
||||
c = Cond(func=ready, return_func=value)
|
||||
c._checked = True
|
||||
return c
|
||||
|
||||
def tell(self, msg, sender):
|
||||
self._received += 1
|
||||
self._inbox.append(Tell(payload=msg, sender=sender))
|
||||
|
||||
def ask(self, msg, timeout=None):
|
||||
self._received += 1
|
||||
ask = Ask(payload=msg)
|
||||
self._inbox.append(ask)
|
||||
expiration = float('inf') if timeout is None else self.now + timeout
|
||||
return ask.replied(expiration=expiration)
|
||||
|
||||
def check_messages(self):
|
||||
while self._inbox:
|
||||
msg = self._inbox.popleft()
|
||||
self._processed += 1
|
||||
if msg.expired(self.now):
|
||||
continue
|
||||
reply = self.on_receive(msg.payload, sender=msg.sender)
|
||||
if isinstance(msg, Ask):
|
||||
msg.reply = reply
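
A short sketch of the message-passing pattern these methods enable (the `peer` attribute and the payloads are made up; how the yielded condition is scheduled depends on soil.time):

class Pinger(Evented):
    def step(self):
        # fire-and-forget message to a hypothetical peer agent
        self.peer.tell("ping", sender=self)
        # suspend until a message arrives or 10 time units pass
        yield self.received(timeout=10)
        self.check_messages()

class Ponger(Evented):
    def on_receive(self, payload, sender=None):
        # for Ask messages, the return value is stored as the reply
        return f"pong: {payload}"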
|
||||
142
soil/agents/fsm.py
Normal file
@@ -0,0 +1,142 @@
|
||||
from . import MetaAgent, BaseAgent
|
||||
|
||||
from functools import partial, wraps
|
||||
import inspect
|
||||
|
||||
|
||||
def state(name=None):
|
||||
def decorator(func, name=None):
|
||||
"""
|
||||
A state function should return either a state id, or a tuple (state_id, when)
|
||||
The default value for state_id is the current state id.
|
||||
The default value for when is the interval defined in the environment.
|
||||
"""
|
||||
if inspect.isgeneratorfunction(func):
|
||||
orig_func = func
|
||||
|
||||
@wraps(func)
|
||||
def func(self):
|
||||
while True:
|
||||
if not self._coroutine:
|
||||
self._coroutine = orig_func(self)
|
||||
|
||||
try:
|
||||
if self._last_except:
|
||||
n = self._coroutine.throw(self._last_except)
|
||||
else:
|
||||
n = self._coroutine.send(self._last_return)
|
||||
if n:
|
||||
return None, n
|
||||
return n
|
||||
except StopIteration as ex:
|
||||
self._coroutine = None
|
||||
next_state = ex.value
|
||||
if next_state is not None:
|
||||
self._set_state(next_state)
|
||||
return next_state
|
||||
finally:
|
||||
self._last_return = None
|
||||
self._last_except = None
|
||||
|
||||
|
||||
|
||||
func.id = name or func.__name__
|
||||
func.is_default = False
|
||||
return func
|
||||
|
||||
if callable(name):
|
||||
return decorator(name)
|
||||
else:
|
||||
return partial(decorator, name=name)
|
||||
|
||||
|
||||
def default_state(func):
|
||||
func.is_default = True
|
||||
return func
|
||||
|
||||
|
||||
class MetaFSM(MetaAgent):
|
||||
def __new__(mcls, name, bases, namespace):
|
||||
states = {}
|
||||
# Re-use states from inherited classes
|
||||
default_state = None
|
||||
for i in bases:
|
||||
if isinstance(i, MetaFSM):
|
||||
for state_id, state in i._states.items():
|
||||
if state.is_default:
|
||||
default_state = state
|
||||
states[state_id] = state
|
||||
|
||||
# Add new states
|
||||
for attr, func in namespace.items():
|
||||
if hasattr(func, "id"):
|
||||
if func.is_default:
|
||||
default_state = func
|
||||
states[func.id] = func
|
||||
|
||||
namespace.update(
|
||||
{
|
||||
"_default_state": default_state,
|
||||
"_states": states,
|
||||
}
|
||||
)
|
||||
|
||||
return super(MetaFSM, mcls).__new__(
|
||||
mcls=mcls, name=name, bases=bases, namespace=namespace
|
||||
)
|
||||
|
||||
|
||||
class FSM(BaseAgent, metaclass=MetaFSM):
|
||||
def __init__(self, **kwargs):
|
||||
super(FSM, self).__init__(**kwargs)
|
||||
if not hasattr(self, "state_id"):
|
||||
if not self._default_state:
|
||||
raise ValueError(
|
||||
"No default state specified for {}".format(self.unique_id)
|
||||
)
|
||||
self.state_id = self._default_state.id
|
||||
|
||||
self._coroutine = None
|
||||
self._set_state(self.state_id)
|
||||
|
||||
def step(self):
|
||||
self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
|
||||
default_interval = super().step()
|
||||
|
||||
next_state = self._states[self.state_id](self)
|
||||
|
||||
when = None
|
||||
try:
|
||||
next_state, *when = next_state
|
||||
if not when:
|
||||
when = None
|
||||
elif len(when) == 1:
|
||||
when = when[0]
|
||||
else:
|
||||
raise ValueError(
|
||||
"Too many values returned. Only state (and time) allowed"
|
||||
)
|
||||
except TypeError:
|
||||
pass
|
||||
|
||||
if next_state is not None:
|
||||
self._set_state(next_state)
|
||||
|
||||
return when or default_interval
|
||||
|
||||
def _set_state(self, state, when=None):
|
||||
if hasattr(state, "id"):
|
||||
state = state.id
|
||||
if state not in self._states:
|
||||
raise ValueError("{} is not a valid state".format(state))
|
||||
self.state_id = state
|
||||
if when is not None:
|
||||
self.model.schedule.add(self, when=when)
|
||||
return state
|
||||
|
||||
def die(self):
|
||||
return self.dead, super().die()
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
return self.die()
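
For context, a minimal agent built on these decorators might look like this (a sketch; the state names and the probability parameter are made up):

class Light(FSM):
    prob_switch = 0.5  # made-up parameter

    @default_state
    @state
    def off(self):
        if self.prob(self.prob_switch):
            return self.on
        # returning nothing keeps the agent in the current state

    @state
    def on(self):
        if self.prob(self.prob_switch):
            return self.off

    @state
    def blinking(self):
        # generator states may yield a time to pause and resume in place,
        # and the final return value selects the next state
        yield 2
        return self.off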
|
||||
82
soil/agents/network_agents.py
Normal file
@@ -0,0 +1,82 @@
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class NetworkAgent(BaseAgent):
|
||||
def __init__(self, *args, topology, node_id, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
assert topology is not None
|
||||
assert node_id is not None
|
||||
self.G = topology
|
||||
assert self.G
|
||||
self.node_id = node_id
|
||||
|
||||
def count_neighbors(self, state_id=None, **kwargs):
|
||||
return len(self.get_neighbors(state_id=state_id, **kwargs))
|
||||
|
||||
def get_neighbors(self, **kwargs):
|
||||
return list(self.iter_agents(limit_neighbors=True, **kwargs))
|
||||
|
||||
@property
|
||||
def node(self):
|
||||
return self.G.nodes[self.node_id]
|
||||
|
||||
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
|
||||
unique_ids = None
|
||||
if isinstance(unique_id, list):
|
||||
unique_ids = set(unique_id)
|
||||
elif unique_id is not None:
|
||||
unique_ids = set(
|
||||
[
|
||||
unique_id,
|
||||
]
|
||||
)
|
||||
|
||||
if limit_neighbors:
|
||||
neighbor_ids = set()
|
||||
for node_id in self.G.neighbors(self.node_id):
|
||||
if self.G.nodes[node_id].get("agent") is not None:
|
||||
neighbor_ids.add(node_id)
|
||||
if unique_ids:
|
||||
unique_ids = unique_ids & neighbor_ids
|
||||
else:
|
||||
unique_ids = neighbor_ids
|
||||
if not unique_ids:
|
||||
return
|
||||
unique_ids = list(unique_ids)
|
||||
yield from super().iter_agents(unique_id=unique_ids, **kwargs)
|
||||
|
||||
def subgraph(self, center=True, **kwargs):
|
||||
include = [self] if center else []
|
||||
G = self.G.subgraph(
|
||||
n.node_id for n in list(self.get_agents(**kwargs) + include)
|
||||
)
|
||||
return G
|
||||
|
||||
def remove_node(self):
|
||||
print(f"Removing node for {self.unique_id}: {self.node_id}")
|
||||
self.G.remove_node(self.node_id)
|
||||
self.node_id = None
|
||||
|
||||
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
|
||||
if self.node_id not in self.G.nodes(data=False):
|
||||
raise ValueError(
|
||||
"{} not in list of existing agents in the network".format(
|
||||
self.unique_id
|
||||
)
|
||||
)
|
||||
if other.node_id not in self.G.nodes(data=False):
|
||||
raise ValueError(
|
||||
"{} not in list of existing agents in the network".format(other)
|
||||
)
|
||||
|
||||
self.G.add_edge(
|
||||
self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs
|
||||
)
|
||||
|
||||
def die(self, remove=True):
|
||||
if not self.alive:
|
||||
return None
|
||||
if remove:
|
||||
self.remove_node()
|
||||
return super().die()
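
A short sketch of how a model typically combines this with the FSM machinery (state names and the infection probability are made up; neighbors are the agents attached to adjacent nodes in self.G):

class RumorSpreader(FSM, NetworkAgent):
    prob_infect = 0.1  # made-up parameter

    @default_state
    @state
    def neutral(self):
        infected = self.count_neighbors(state_id=self.spreading.id)
        if self.prob(infected * self.prob_infect):
            return self.spreading

    @state
    def spreading(self):
        # stays in this state; neighbors count it via state_id
        pass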
|
||||
176
soil/config.py
@@ -19,6 +19,7 @@ import networkx as nx
|
||||
# Could use TypeAlias in python >= 3.10
|
||||
nodeId = int
|
||||
|
||||
|
||||
class Node(BaseModel):
|
||||
id: nodeId
|
||||
state: Optional[Dict[str, Any]] = {}
|
||||
@@ -43,7 +44,7 @@ class NetParams(BaseModel, extra=Extra.allow):
|
||||
|
||||
class NetConfig(BaseModel):
|
||||
params: Optional[NetParams]
|
||||
topology: Optional[Union[Topology, nx.Graph]]
|
||||
fixed: Optional[Union[Topology, nx.Graph]]
|
||||
path: Optional[str]
|
||||
|
||||
class Config:
|
||||
@@ -54,14 +55,15 @@ class NetConfig(BaseModel):
|
||||
return NetConfig(topology=None, params=None)
|
||||
|
||||
@root_validator
|
||||
def validate_all(cls, values):
|
||||
if 'params' not in values and 'topology' not in values:
|
||||
raise ValueError('You must specify either a topology or the parameters to generate a graph')
|
||||
def validate_all(cls, values):
|
||||
if "params" not in values and "topology" not in values:
|
||||
raise ValueError(
|
||||
"You must specify either a topology or the parameters to generate a graph"
|
||||
)
|
||||
return values
|
||||
|
||||
|
||||
class EnvConfig(BaseModel):
|
||||
|
||||
@staticmethod
|
||||
def default():
|
||||
return EnvConfig()
|
||||
@@ -70,7 +72,7 @@ class EnvConfig(BaseModel):
|
||||
class SingleAgentConfig(BaseModel):
|
||||
agent_class: Optional[Union[Type, str]] = None
|
||||
unique_id: Optional[int] = None
|
||||
topology: Optional[str] = None
|
||||
topology: Optional[bool] = False
|
||||
node_id: Optional[Union[int, str]] = None
|
||||
state: Optional[Dict[str, Any]] = {}
|
||||
|
||||
@@ -80,9 +82,11 @@ class FixedAgentConfig(SingleAgentConfig):
|
||||
hidden: Optional[bool] = False # Do not count this agent towards total agent count
|
||||
|
||||
@root_validator
|
||||
def validate_all(cls, values):
|
||||
if values.get('agent_id', None) is not None and values.get('n', 1) > 1:
|
||||
raise ValueError(f"An agent_id can only be provided when there is only one agent ({values.get('n')} given)")
|
||||
def validate_all(cls, values):
|
||||
if values.get("unique_id", None) is not None and values.get("n", 1) > 1:
|
||||
raise ValueError(
|
||||
f"An unique_id can only be provided when there is only one agent ({values.get('n')} given)"
|
||||
)
|
||||
return values
|
||||
|
||||
|
||||
@@ -91,8 +95,8 @@ class OverrideAgentConfig(FixedAgentConfig):
|
||||
|
||||
|
||||
class Strategy(Enum):
|
||||
topology = 'topology'
|
||||
total = 'total'
|
||||
topology = "topology"
|
||||
total = "total"
|
||||
|
||||
|
||||
class AgentDistro(SingleAgentConfig):
|
||||
@@ -102,7 +106,6 @@ class AgentDistro(SingleAgentConfig):
|
||||
|
||||
class AgentConfig(SingleAgentConfig):
|
||||
n: Optional[int] = None
|
||||
topology: Optional[str]
|
||||
distribution: Optional[List[AgentDistro]] = None
|
||||
fixed: Optional[List[FixedAgentConfig]] = None
|
||||
override: Optional[List[OverrideAgentConfig]] = None
|
||||
@@ -112,16 +115,20 @@ class AgentConfig(SingleAgentConfig):
|
||||
return AgentConfig()
|
||||
|
||||
@root_validator
|
||||
def validate_all(cls, values):
|
||||
if 'distribution' in values and ('n' not in values and 'topology' not in values):
|
||||
raise ValueError("You need to provide the number of agents or a topology to extract the value from.")
|
||||
def validate_all(cls, values):
|
||||
if "distribution" in values and (
|
||||
"n" not in values and "topology" not in values
|
||||
):
|
||||
raise ValueError(
|
||||
"You need to provide the number of agents or a topology to extract the value from."
|
||||
)
|
||||
return values
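
As a rough illustration (a sketch; the class path and weights are made up), an agent group is typically declared with either a fixed number of agents or a topology to draw the count from:

cfg = AgentConfig(
    topology=True,  # assign these agents to nodes of the model topology
    distribution=[
        AgentDistro(agent_class="my_module.MyAgent", weight=3),
        AgentDistro(agent_class="my_module.MyAgent", weight=1),
    ],
)
# validate_all insists on having either `n` or a topology when a distribution is given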
|
||||
|
||||
|
||||
class Config(BaseModel, extra=Extra.allow):
|
||||
version: Optional[str] = '1'
|
||||
version: Optional[str] = "1"
|
||||
|
||||
name: str = 'Unnamed Simulation'
|
||||
name: str = "Unnamed Simulation"
|
||||
description: Optional[str] = None
|
||||
group: str = None
|
||||
dir_path: Optional[str] = None
|
||||
@@ -141,45 +148,48 @@ class Config(BaseModel, extra=Extra.allow):
|
||||
def from_raw(cls, cfg):
|
||||
if isinstance(cfg, Config):
|
||||
return cfg
|
||||
if cfg.get('version', '1') == '1' and any(k in cfg for k in ['agents', 'agent_class', 'topology', 'environment_class']):
|
||||
if cfg.get("version", "1") == "1" and any(
|
||||
k in cfg for k in ["agents", "agent_class", "topology", "environment_class"]
|
||||
):
|
||||
return convert_old(cfg)
|
||||
return Config(**cfg)
|
||||
|
||||
|
||||
def convert_old(old, strict=True):
|
||||
'''
|
||||
"""
|
||||
Try to convert old style configs into the new format.
|
||||
|
||||
This is still a work in progress and might not work in many cases.
|
||||
'''
|
||||
"""
|
||||
|
||||
utils.logger.warning('The old configuration format is deprecated. The converted file MAY NOT yield the right results')
|
||||
utils.logger.warning(
|
||||
"The old configuration format is deprecated. The converted file MAY NOT yield the right results"
|
||||
)
|
||||
|
||||
new = old.copy()
|
||||
|
||||
network = {}
|
||||
|
||||
if 'topology' in old:
|
||||
del new['topology']
|
||||
network['topology'] = old['topology']
|
||||
if "topology" in old:
|
||||
del new["topology"]
|
||||
network["topology"] = old["topology"]
|
||||
|
||||
if 'network_params' in old and old['network_params']:
|
||||
del new['network_params']
|
||||
for (k, v) in old['network_params'].items():
|
||||
if k == 'path':
|
||||
network['path'] = v
|
||||
if "network_params" in old and old["network_params"]:
|
||||
del new["network_params"]
|
||||
for (k, v) in old["network_params"].items():
|
||||
if k == "path":
|
||||
network["path"] = v
|
||||
else:
|
||||
network.setdefault('params', {})[k] = v
|
||||
network.setdefault("params", {})[k] = v
|
||||
|
||||
topologies = {}
|
||||
topology = None
|
||||
if network:
|
||||
topologies['default'] = network
|
||||
topology = network
|
||||
|
||||
|
||||
agents = {'fixed': [], 'distribution': []}
|
||||
agents = {"fixed": [], "distribution": []}
|
||||
|
||||
def updated_agent(agent):
|
||||
'''Convert an agent definition'''
|
||||
"""Convert an agent definition"""
|
||||
newagent = dict(agent)
|
||||
return newagent
|
||||
|
||||
@@ -187,80 +197,74 @@ def convert_old(old, strict=True):
|
||||
fixed = []
|
||||
override = []
|
||||
|
||||
if 'environment_agents' in new:
|
||||
if "environment_agents" in new:
|
||||
|
||||
for agent in new['environment_agents']:
|
||||
agent.setdefault('state', {})['group'] = 'environment'
|
||||
if 'agent_id' in agent:
|
||||
agent['state']['name'] = agent['agent_id']
|
||||
del agent['agent_id']
|
||||
agent['hidden'] = True
|
||||
agent['topology'] = None
|
||||
for agent in new["environment_agents"]:
|
||||
agent.setdefault("state", {})["group"] = "environment"
|
||||
if "agent_id" in agent:
|
||||
agent["state"]["name"] = agent["agent_id"]
|
||||
del agent["agent_id"]
|
||||
agent["hidden"] = True
|
||||
agent["topology"] = False
|
||||
fixed.append(updated_agent(agent))
|
||||
del new['environment_agents']
|
||||
del new["environment_agents"]
|
||||
|
||||
if "agent_class" in old:
|
||||
del new["agent_class"]
|
||||
agents["agent_class"] = old["agent_class"]
|
||||
|
||||
if 'agent_class' in old:
|
||||
del new['agent_class']
|
||||
agents['agent_class'] = old['agent_class']
|
||||
if "default_state" in old:
|
||||
del new["default_state"]
|
||||
agents["state"] = old["default_state"]
|
||||
|
||||
if 'default_state' in old:
|
||||
del new['default_state']
|
||||
agents['state'] = old['default_state']
|
||||
if "network_agents" in old:
|
||||
agents["topology"] = True
|
||||
|
||||
if 'network_agents' in old:
|
||||
agents['topology'] = 'default'
|
||||
agents.setdefault("state", {})["group"] = "network"
|
||||
|
||||
agents.setdefault('state', {})['group'] = 'network'
|
||||
|
||||
for agent in new['network_agents']:
|
||||
for agent in new["network_agents"]:
|
||||
agent = updated_agent(agent)
|
||||
if 'agent_id' in agent:
|
||||
agent['state']['name'] = agent['agent_id']
|
||||
del agent['agent_id']
|
||||
if "agent_id" in agent:
|
||||
agent["state"]["name"] = agent["agent_id"]
|
||||
del agent["agent_id"]
|
||||
fixed.append(agent)
|
||||
else:
|
||||
by_weight.append(agent)
|
||||
del new['network_agents']
|
||||
|
||||
if 'agent_class' in old and (not fixed and not by_weight):
|
||||
agents['topology'] = 'default'
|
||||
by_weight = [{'agent_class': old['agent_class'], 'weight': 1}]
|
||||
del new["network_agents"]
|
||||
|
||||
if "agent_class" in old and (not fixed and not by_weight):
|
||||
agents["topology"] = True
|
||||
by_weight = [{"agent_class": old["agent_class"], "weight": 1}]
|
||||
|
||||
# TODO: translate states properly
|
||||
if 'states' in old:
|
||||
del new['states']
|
||||
states = old['states']
|
||||
if "states" in old:
|
||||
del new["states"]
|
||||
states = old["states"]
|
||||
if isinstance(states, dict):
|
||||
states = states.items()
|
||||
else:
|
||||
states = enumerate(states)
|
||||
for (k, v) in states:
|
||||
override.append({'filter': {'node_id': k},
|
||||
'state': v})
|
||||
|
||||
agents['override'] = override
|
||||
agents['fixed'] = fixed
|
||||
agents['distribution'] = by_weight
|
||||
override.append({"filter": {"node_id": k}, "state": v})
|
||||
|
||||
agents["override"] = override
|
||||
agents["fixed"] = fixed
|
||||
agents["distribution"] = by_weight
|
||||
|
||||
model_params = {}
|
||||
if 'environment_params' in new:
|
||||
del new['environment_params']
|
||||
model_params = dict(old['environment_params'])
|
||||
if "environment_params" in new:
|
||||
del new["environment_params"]
|
||||
model_params = dict(old["environment_params"])
|
||||
|
||||
if 'environment_class' in old:
|
||||
del new['environment_class']
|
||||
new['model_class'] = old['environment_class']
|
||||
if "environment_class" in old:
|
||||
del new["environment_class"]
|
||||
new["model_class"] = old["environment_class"]
|
||||
|
||||
if 'dump' in old:
|
||||
del new['dump']
|
||||
new['dry_run'] = not old['dump']
|
||||
if "dump" in old:
|
||||
del new["dump"]
|
||||
new["dry_run"] = not old["dump"]
|
||||
|
||||
model_params['topologies'] = topologies
|
||||
model_params['agents'] = agents
|
||||
model_params["topology"] = topology
|
||||
model_params["agents"] = agents
|
||||
|
||||
return Config(version='2',
|
||||
model_params=model_params,
|
||||
**new)
|
||||
return Config(version="2", model_params=model_params, **new)
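For orientation, a rough sketch of what the conversion above is expected to produce. The input keys mirror the old-style options handled in this hunk; the exact fields of the resulting Config depend on soil.config, so the output shown in the comments is an assumption and the agent class name is a placeholder.

# Hypothetical old-style options, using keys handled by convert_old above.
old = {
    "name": "example",
    "network_agents": [{"agent_class": "MyAgent", "weight": 1}],
    "environment_params": {"some_param": 1},
}
# After conversion, agent definitions end up grouped under model_params["agents"],
# roughly (per the code above): topology enabled, the weighted entry moved into
# "distribution", and the "network" group recorded in the default state:
# {"fixed": [], "distribution": [{"agent_class": "MyAgent", "weight": 1, ...}],
#  "override": [], "topology": True, "state": {"group": "network"}}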
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
from mesa import DataCollector as MDC
|
||||
|
||||
class SoilDataCollector(MDC):
|
||||
|
||||
class SoilDataCollector(MDC):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
@@ -18,9 +18,9 @@ def wrapcmd(func):
|
||||
known = globals()
|
||||
known.update(self.curframe.f_globals)
|
||||
known.update(self.curframe.f_locals)
|
||||
known['agent'] = known.get('self', None)
|
||||
known['model'] = known.get('self', {}).get('model')
|
||||
known['attrs'] = arg.strip().split()
|
||||
known["agent"] = known.get("self", None)
|
||||
known["model"] = known.get("self", {}).get("model")
|
||||
known["attrs"] = arg.strip().split()
|
||||
|
||||
exec(func.__code__, known, known)
|
||||
|
||||
@@ -29,10 +29,12 @@ def wrapcmd(func):
|
||||
|
||||
class Debug(pdb.Pdb):
|
||||
def __init__(self, *args, skip_soil=False, **kwargs):
|
||||
skip = kwargs.get('skip', [])
|
||||
skip = kwargs.get("skip", [])
|
||||
if skip_soil:
|
||||
skip.append('soil.*')
|
||||
skip.append('mesa.*')
|
||||
skip.append("soil")
|
||||
skip.append("contextlib")
|
||||
skip.append("soil.*")
|
||||
skip.append("mesa.*")
|
||||
super(Debug, self).__init__(*args, skip=skip, **kwargs)
|
||||
self.prompt = "[soil-pdb] "
|
||||
|
||||
@@ -40,7 +42,7 @@ class Debug(pdb.Pdb):
|
||||
def _soil_agents(model, attrs=None, pretty=True, **kwargs):
|
||||
for agent in model.agents(**kwargs):
|
||||
d = agent
|
||||
print(' - ' + indent(agent.to_str(keys=attrs, pretty=pretty), ' '))
|
||||
print(" - " + indent(agent.to_str(keys=attrs, pretty=pretty), " "))
|
||||
|
||||
@wrapcmd
|
||||
def do_soil_agents():
|
||||
@@ -50,14 +52,20 @@ class Debug(pdb.Pdb):
|
||||
|
||||
@wrapcmd
|
||||
def do_soil_list():
|
||||
return Debug._soil_agents(model, attrs=['state_id'], pretty=False)
|
||||
return Debug._soil_agents(model, attrs=["state_id"], pretty=False)
|
||||
|
||||
do_sl = do_soil_list
|
||||
|
||||
def do_continue_state(self, arg):
|
||||
self.do_break_state(arg, temporary=True)
|
||||
return self.do_continue("")
|
||||
|
||||
do_cs = do_continue_state
|
||||
|
||||
@wrapcmd
|
||||
def do_soil_self():
|
||||
def do_soil_agent():
|
||||
if not agent:
|
||||
print('No agent available')
|
||||
print("No agent available")
|
||||
return
|
||||
|
||||
keys = None
|
||||
@@ -70,46 +78,56 @@ class Debug(pdb.Pdb):
|
||||
|
||||
print(agent.to_str(pretty=True, keys=keys))
|
||||
|
||||
do_ss = do_soil_self
|
||||
do_aa = do_soil_agent
|
||||
|
||||
def do_break_state(self, arg: str, temporary=False):
|
||||
'''
|
||||
def do_break_state(self, arg: str, instances=None, temporary=False):
|
||||
"""
|
||||
Break before a specified state is stepped into.
|
||||
'''
|
||||
"""
|
||||
|
||||
klass = None
|
||||
state = arg.strip()
|
||||
state = arg
|
||||
if not state:
|
||||
self.error("Specify at least a state name")
|
||||
return
|
||||
|
||||
comma = arg.find(':')
|
||||
if comma > 0:
|
||||
state = arg[comma+1:].lstrip()
|
||||
klass = arg[:comma].rstrip()
|
||||
klass = eval(klass,
|
||||
self.curframe.f_globals,
|
||||
self.curframe_locals)
|
||||
state, *tokens = state.lstrip().split()
|
||||
if tokens:
|
||||
instances = list(eval(token) for token in tokens)
|
||||
|
||||
colon = state.find(":")
|
||||
|
||||
if colon > 0:
|
||||
klass = state[:colon].rstrip()
|
||||
state = state[colon + 1 :].strip()
|
||||
|
||||
print(klass, state, tokens)
|
||||
klass = eval(klass, self.curframe.f_globals, self.curframe_locals)
|
||||
|
||||
if klass:
|
||||
klasses = [klass]
|
||||
else:
|
||||
klasses = [k for k in self.curframe.f_globals.values() if isinstance(k, type) and issubclass(k, FSM)]
|
||||
print(klasses)
|
||||
if not klasses:
|
||||
self.error('No agent classes found')
|
||||
klasses = [
|
||||
k
|
||||
for k in self.curframe.f_globals.values()
|
||||
if isinstance(k, type) and issubclass(k, FSM)
|
||||
]
|
||||
|
||||
if not klasses:
|
||||
self.error("No agent classes found")
|
||||
|
||||
for klass in klasses:
|
||||
try:
|
||||
func = getattr(klass, state)
|
||||
except AttributeError:
|
||||
self.error(f"State {state} not found in class {klass}")
|
||||
continue
|
||||
if hasattr(func, '__func__'):
|
||||
if hasattr(func, "__func__"):
|
||||
func = func.__func__
|
||||
|
||||
code = func.__code__
|
||||
#use co_name to identify the bkpt (function names
|
||||
#could be aliased, but co_name is invariant)
|
||||
# use co_name to identify the bkpt (function names
|
||||
# could be aliased, but co_name is invariant)
|
||||
funcname = code.co_name
|
||||
lineno = code.co_firstlineno
|
||||
filename = code.co_filename
|
||||
@@ -117,35 +135,56 @@ class Debug(pdb.Pdb):
|
||||
# Check for reasonable breakpoint
|
||||
line = self.checkline(filename, lineno)
|
||||
if not line:
|
||||
raise ValueError('no line found')
|
||||
raise ValueError("no line found")
|
||||
# now set the break point
|
||||
cond = None
|
||||
if instances:
|
||||
cond = f"self.unique_id in { repr(instances) }"
|
||||
|
||||
existing = self.get_breaks(filename, line)
|
||||
if existing:
|
||||
self.message("Breakpoint already exists at %s:%d" %
|
||||
(filename, line))
|
||||
self.message("Breakpoint already exists at %s:%d" % (filename, line))
|
||||
continue
|
||||
err = self.set_break(filename, line, temporary, cond, funcname)
|
||||
if err:
|
||||
self.error(err)
|
||||
else:
|
||||
bp = self.get_breaks(filename, line)[-1]
|
||||
self.message("Breakpoint %d at %s:%d" %
|
||||
(bp.number, bp.file, bp.line))
|
||||
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
|
||||
|
||||
do_bs = do_break_state
|
||||
|
||||
def do_break_state_self(self, arg: str, temporary=False):
|
||||
"""
|
||||
Break before a specified state is stepped into, for the current agent
|
||||
"""
|
||||
agent = self.curframe.f_locals.get("self")
|
||||
if not agent:
|
||||
self.error("No current agent.")
|
||||
self.error("Try this again when the debugger is stopped inside an agent")
|
||||
return
|
||||
|
||||
def setup(frame=None):
|
||||
debugger = Debug()
|
||||
arg = f"{agent.__class__.__name__}:{ arg } {agent.unique_id}"
|
||||
return self.do_break_state(arg)
|
||||
|
||||
do_bss = do_break_state_self
|
||||
|
||||
|
||||
debugger = None
|
||||
|
||||
|
||||
def set_trace(frame=None, **kwargs):
|
||||
global debugger
|
||||
if debugger is None:
|
||||
debugger = Debug(**kwargs)
|
||||
frame = frame or sys._getframe().f_back
|
||||
debugger.set_trace(frame)
|
||||
|
||||
def debug_env():
|
||||
if os.environ.get('SOIL_DEBUG'):
|
||||
return setup(frame=sys._getframe().f_back)
|
||||
|
||||
def post_mortem(traceback=None):
|
||||
p = Debug()
|
||||
def post_mortem(traceback=None, **kwargs):
|
||||
global debugger
|
||||
if debugger is None:
|
||||
debugger = Debug(**kwargs)
|
||||
t = sys.exc_info()[2]
|
||||
p.reset()
|
||||
p.interaction(None, t)
|
||||
debugger.reset()
|
||||
debugger.interaction(None, t)
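The argument parsed by break_state above is a state name, optionally prefixed by an agent class and followed by instance ids; the ids become a breakpoint condition on self.unique_id. A hedged sketch of using these entry points, assuming the code above lives in a soil.debugging module and using a made-up agent class and state name:

import soil.debugging as debugging   # assumed module path; not shown in this hunk

debugging.set_trace()                # drops into the "[soil-pdb] " prompt defined above

# Commands provided by the Debug class (pdb strips the do_ prefix):
#   soil_list                      (sl)  - one line per agent, showing its state_id
#   soil_agent                     (aa)  - print the current agent, if any
#   break_state MyAgent:wander 0 3 (bs)  - break when MyAgent.wander runs for agents
#                                          with unique_id 0 or 3
#   continue_state MyAgent:wander  (cs)  - same, but as a temporary breakpoint + continue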
|
||||
|
||||
@@ -3,8 +3,8 @@ from __future__ import annotations
|
||||
import os
|
||||
import sqlite3
|
||||
import math
|
||||
import random
|
||||
import logging
|
||||
import inspect
|
||||
|
||||
from typing import Any, Dict, Optional, Union
|
||||
from collections import namedtuple
|
||||
@@ -18,10 +18,7 @@ import networkx as nx
|
||||
from mesa import Model
|
||||
from mesa.datacollection import DataCollector
|
||||
|
||||
from . import agents as agentmod, config, serialization, utils, time, network
|
||||
|
||||
|
||||
Record = namedtuple('Record', 'dict_id t_step key value')
|
||||
from . import agents as agentmod, config, serialization, utils, time, network, events
|
||||
|
||||
|
||||
class BaseEnvironment(Model):
|
||||
@@ -37,20 +34,24 @@ class BaseEnvironment(Model):
|
||||
:meth:`soil.environment.Environment.get` method.
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
id='unnamed_env',
|
||||
seed='default',
|
||||
schedule=None,
|
||||
dir_path=None,
|
||||
interval=1,
|
||||
agent_class=None,
|
||||
agents: [tuple[type, Dict[str, Any]]] = {},
|
||||
agent_reporters: Optional[Any] = None,
|
||||
model_reporters: Optional[Any] = None,
|
||||
tables: Optional[Any] = None,
|
||||
**env_params):
|
||||
def __init__(
|
||||
self,
|
||||
id="unnamed_env",
|
||||
seed="default",
|
||||
schedule=None,
|
||||
dir_path=None,
|
||||
interval=1,
|
||||
agent_class=None,
|
||||
agents: [tuple[type, Dict[str, Any]]] = {},
|
||||
agent_reporters: Optional[Any] = None,
|
||||
model_reporters: Optional[Any] = None,
|
||||
tables: Optional[Any] = None,
|
||||
**env_params,
|
||||
):
|
||||
|
||||
super().__init__(seed=seed)
|
||||
self.env_params = env_params or {}
|
||||
|
||||
self.current_id = -1
|
||||
|
||||
self.id = id
|
||||
@@ -63,11 +64,8 @@ class BaseEnvironment(Model):
|
||||
|
||||
self.agent_class = agent_class or agentmod.BaseAgent
|
||||
|
||||
self.init_agents(agents)
|
||||
|
||||
self.env_params = env_params or {}
|
||||
|
||||
self.interval = interval
|
||||
self.init_agents(agents)
|
||||
|
||||
self.logger = utils.logger.getChild(self.id)
|
||||
|
||||
@@ -77,17 +75,27 @@ class BaseEnvironment(Model):
|
||||
tables=tables,
|
||||
)
|
||||
|
||||
def _read_single_agent(self, agent):
|
||||
def _agent_from_dict(self, agent):
|
||||
"""
|
||||
Translate an agent dictionary into an agent
|
||||
"""
|
||||
agent = dict(**agent)
|
||||
cls = agent.pop('agent_class', None) or self.agent_class
|
||||
unique_id = agent.pop('unique_id', None)
|
||||
cls = agent.pop("agent_class", None) or self.agent_class
|
||||
unique_id = agent.pop("unique_id", None)
|
||||
if unique_id is None:
|
||||
unique_id = self.next_id()
|
||||
|
||||
return serialization.deserialize(cls)(unique_id=unique_id,
|
||||
model=self, **agent)
|
||||
return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)
|
||||
|
||||
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
|
||||
"""
|
||||
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
|
||||
dictionaries that each describes an agent.
|
||||
|
||||
If given a list of dictionaries, an agent will be created for each dictionary. The agent
|
||||
class can be specified through the `agent_class` key. The rest of the items will be used
|
||||
as parameters to the agent.
|
||||
"""
|
||||
if not agents:
|
||||
return
|
||||
|
||||
@@ -98,14 +106,11 @@ class BaseEnvironment(Model):
|
||||
lst = config.AgentConfig(**agents)
|
||||
if lst.override:
|
||||
override = lst.override
|
||||
lst = agentmod.from_config(lst,
|
||||
topologies=getattr(self, 'topologies', None),
|
||||
random=self.random)
|
||||
lst = self._agent_dict_from_config(lst)
|
||||
|
||||
#TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
|
||||
# TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
|
||||
# because it needs attribute such as unique_id, which are only present after init
|
||||
new_agents = [self._read_single_agent(agent) for agent in lst]
|
||||
|
||||
new_agents = [self._agent_from_dict(agent) for agent in lst]
|
||||
|
||||
for a in new_agents:
|
||||
self.schedule.add(a)
|
||||
@@ -115,6 +120,8 @@ class BaseEnvironment(Model):
|
||||
for attr, value in rule.state.items():
|
||||
setattr(agent, attr, value)
|
||||
|
||||
def _agent_dict_from_config(self, cfg):
|
||||
return agentmod.from_config(cfg, random=self.random)
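As the docstring above says, init_agents also accepts a plain list of dictionaries, where agent_class and unique_id are popped off and the remaining keys become constructor parameters. A minimal sketch, assuming the module layout implied by this diff and an illustrative agent class; scheduler and other defaults are left to the constructor.

from soil import agents, environment

class Walker(agents.BaseAgent):          # illustrative agent class
    def __init__(self, *args, speed=1, **kwargs):
        super().__init__(*args, **kwargs)
        self.speed = speed

env = environment.BaseEnvironment(
    agents=[
        {"agent_class": Walker, "speed": 2},      # extra keys become agent parameters
        {"agent_class": Walker, "unique_id": 7},  # explicit ids are honoured
    ]
)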
|
||||
|
||||
@property
|
||||
def agents(self):
|
||||
@@ -130,15 +137,16 @@ class BaseEnvironment(Model):
|
||||
def now(self):
|
||||
if self.schedule:
|
||||
return self.schedule.time
|
||||
raise Exception('The environment has not been scheduled, so it has no sense of time')
|
||||
raise Exception(
|
||||
"The environment has not been scheduled, so it has no sense of time"
|
||||
)
|
||||
|
||||
def add_agent(self, unique_id=None, **kwargs):
|
||||
if unique_id is None:
|
||||
unique_id = self.next_id()
|
||||
|
||||
def add_agent(self, agent_id, agent_class, **kwargs):
|
||||
a = None
|
||||
if agent_class:
|
||||
a = agent_class(model=self,
|
||||
unique_id=agent_id,
|
||||
**kwargs)
|
||||
kwargs["unique_id"] = unique_id
|
||||
a = self._agent_from_dict(kwargs)
|
||||
|
||||
self.schedule.add(a)
|
||||
return a
|
||||
@@ -151,16 +159,18 @@ class BaseEnvironment(Model):
|
||||
for k, v in kwargs:
|
||||
message += " {k}={v} ".format(k, v)
|
||||
extra = {}
|
||||
extra['now'] = self.now
|
||||
extra['id'] = self.id
|
||||
extra["now"] = self.now
|
||||
extra["id"] = self.id
|
||||
return self.logger.log(level, message, extra=extra)
|
||||
|
||||
def step(self):
|
||||
'''
|
||||
"""
|
||||
Advance one step in the simulation, and update the data collection and scheduler appropriately
|
||||
'''
|
||||
"""
|
||||
super().step()
|
||||
self.logger.info(f'--- Step {self.now:^5} ---')
|
||||
self.logger.info(
|
||||
f"--- Step: {self.schedule.steps:^5} - Time: {self.now:^5} ---"
|
||||
)
|
||||
self.schedule.step()
|
||||
self.datacollector.collect(self)
|
||||
|
||||
@@ -168,10 +178,10 @@ class BaseEnvironment(Model):
|
||||
return key in self.env_params
|
||||
|
||||
def get(self, key, default=None):
|
||||
'''
|
||||
"""
|
||||
Get the value of an environment attribute.
|
||||
Return `default` if the value is not set.
|
||||
'''
|
||||
"""
|
||||
return self.env_params.get(key, default)
|
||||
|
||||
def __getitem__(self, key):
|
||||
@@ -180,123 +190,135 @@ class BaseEnvironment(Model):
|
||||
def __setitem__(self, key, value):
|
||||
return self.env_params.__setitem__(key, value)
|
||||
|
||||
def _agent_to_tuples(self, agent, now=None):
|
||||
if now is None:
|
||||
now = self.now
|
||||
for k, v in agent.state.items():
|
||||
yield Record(dict_id=agent.id,
|
||||
t_step=now,
|
||||
key=k,
|
||||
value=v)
|
||||
|
||||
def state_to_tuples(self, agent_id=None, now=None):
|
||||
if now is None:
|
||||
now = self.now
|
||||
|
||||
if agent_id:
|
||||
agent = self.agents[agent_id]
|
||||
yield from self._agent_to_tuples(agent, now)
|
||||
return
|
||||
|
||||
for k, v in self.env_params.items():
|
||||
yield Record(dict_id='env',
|
||||
t_step=now,
|
||||
key=k,
|
||||
value=v)
|
||||
for agent in self.agents:
|
||||
yield from self._agent_to_tuples(agent, now)
|
||||
def __str__(self):
|
||||
return str(self.env_params)
|
||||
|
||||
|
||||
class NetworkEnvironment(BaseEnvironment):
|
||||
"""
|
||||
The NetworkEnvironment is an environment that includes one or more networkx.Graph instances
|
||||
and methods to associate agents to nodes and vice versa.
|
||||
"""
|
||||
|
||||
def __init__(self, *args, topology: nx.Graph = None, topologies: Dict[str, config.NetConfig] = {}, **kwargs):
|
||||
agents = kwargs.pop('agents', None)
|
||||
def __init__(
|
||||
self, *args, topology: Union[config.NetConfig, nx.Graph] = None, **kwargs
|
||||
):
|
||||
agents = kwargs.pop("agents", None)
|
||||
super().__init__(*args, agents=None, **kwargs)
|
||||
self._node_ids = {}
|
||||
assert not hasattr(self, 'topologies')
|
||||
if topology is not None:
|
||||
if topologies:
|
||||
raise ValueError('Please, provide either a single topology or a dictionary of them')
|
||||
topologies = {'default': topology}
|
||||
|
||||
self.topologies = {}
|
||||
for (name, cfg) in topologies.items():
|
||||
self.set_topology(cfg=cfg, graph=name)
|
||||
self._set_topology(topology)
|
||||
|
||||
self.init_agents(agents)
|
||||
|
||||
def init_agents(self, *args, **kwargs):

"""Initialize the agents from the given configuration and assign network agents to their nodes"""
|
||||
super().init_agents(*args, **kwargs)
|
||||
for agent in self.schedule._agents.values():
|
||||
if hasattr(agent, "node_id"):
|
||||
self._init_node(agent)
|
||||
|
||||
def _read_single_agent(self, agent, unique_id=None):
|
||||
def _init_node(self, agent):
|
||||
"""
|
||||
Make sure the node for a given agent has the proper attributes.
|
||||
"""
|
||||
self.G.nodes[agent.node_id]["agent"] = agent
|
||||
|
||||
def _agent_dict_from_config(self, cfg):
|
||||
return agentmod.from_config(cfg, topology=self.G, random=self.random)
|
||||
|
||||
def _agent_from_dict(self, agent, unique_id=None):
|
||||
agent = dict(agent)
|
||||
|
||||
if agent.get('topology', None) is not None:
|
||||
topology = agent.get('topology')
|
||||
if unique_id is None:
|
||||
unique_id = self.next_id()
|
||||
if topology:
|
||||
node_id = self.agent_to_node(unique_id, graph_name=topology, node_id=agent.get('node_id'))
|
||||
agent['node_id'] = node_id
|
||||
agent['topology'] = topology
|
||||
agent['unique_id'] = unique_id
|
||||
if not agent.get("topology", False):
|
||||
return super()._agent_from_dict(agent)
|
||||
|
||||
return super()._read_single_agent(agent)
|
||||
if unique_id is None:
|
||||
unique_id = self.next_id()
|
||||
node_id = agent.get("node_id", None)
|
||||
if node_id is None:
|
||||
node_id = network.find_unassigned(self.G, random=self.random)
|
||||
self.G.nodes[node_id]["agent"] = None
|
||||
agent["node_id"] = node_id
|
||||
agent["unique_id"] = unique_id
|
||||
agent["topology"] = self.G
|
||||
node_attrs = self.G.nodes[node_id]
|
||||
node_attrs.update(agent)
|
||||
agent = node_attrs
|
||||
|
||||
a = super()._agent_from_dict(agent)
|
||||
self._init_node(a)
|
||||
|
||||
@property
|
||||
def topology(self):
|
||||
return self.topologies['default']
|
||||
return a
|
||||
|
||||
def set_topology(self, cfg=None, dir_path=None, graph='default'):
|
||||
topology = cfg
|
||||
if not isinstance(cfg, nx.Graph):
|
||||
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
|
||||
def _set_topology(self, cfg=None, dir_path=None):
|
||||
if cfg is None:
|
||||
cfg = nx.Graph()
|
||||
elif not isinstance(cfg, nx.Graph):
|
||||
cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)
|
||||
|
||||
self.topologies[graph] = topology
|
||||
|
||||
def topology_for(self, unique_id):
|
||||
return self.topologies[self._node_ids[unique_id][0]]
|
||||
self.G = cfg
|
||||
|
||||
@property
|
||||
def network_agents(self):
|
||||
yield from self.agents(agent_class=agentmod.NetworkAgent)
|
||||
for a in self.schedule._agents:
|
||||
if isinstance(a, agentmod.NetworkAgent):
|
||||
yield a
|
||||
|
||||
def agent_to_node(self, unique_id, graph_name='default',
|
||||
node_id=None, shuffle=False):
|
||||
node_id = network.agent_to_node(G=self.topologies[graph_name],
|
||||
agent_id=unique_id,
|
||||
node_id=node_id,
|
||||
shuffle=shuffle,
|
||||
random=self.random)
|
||||
def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
|
||||
if unique_id is None:
|
||||
unique_id = self.next_id()
|
||||
if node_id is None:
|
||||
node_id = network.find_unassigned(
|
||||
G=self.G, shuffle=True, random=self.random
|
||||
)
|
||||
if node_id is None:
|
||||
node_id = f"node_for_{unique_id}"
|
||||
|
||||
self._node_ids[unique_id] = (graph_name, node_id)
|
||||
return node_id
|
||||
if node_id not in self.G.nodes:
|
||||
self.G.add_node(node_id)
|
||||
|
||||
def add_node(self, agent_class, topology, **kwargs):
|
||||
unique_id = self.next_id()
|
||||
self.topologies[topology].add_node(unique_id)
|
||||
node_id = self.agent_to_node(unique_id=unique_id, node_id=unique_id, graph_name=topology)
|
||||
assert "agent" not in self.G.nodes[node_id]
|
||||
self.G.nodes[node_id]["agent"] = None # Reserve
|
||||
|
||||
a = self.add_agent(unique_id=unique_id, agent_class=agent_class, node_id=node_id, topology=topology, **kwargs)
|
||||
a['visible'] = True
|
||||
a = self.add_agent(
|
||||
unique_id=unique_id,
|
||||
agent_class=agent_class,
|
||||
topology=self.G,
|
||||
node_id=node_id,
|
||||
**kwargs,
|
||||
)
|
||||
a["visible"] = True
|
||||
return a
|
||||
|
||||
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
|
||||
agent1 = agent1.node_id
|
||||
agent2 = agent2.node_id
|
||||
return self.topologies[graph].add_edge(agent1, agent2, start=start)
|
||||
|
||||
def add_agent(self, unique_id, state=None, graph='default', **kwargs):
|
||||
node = self.topologies[graph].nodes[unique_id]
|
||||
node_state = node.get('state', {})
|
||||
if node_state:
|
||||
node_state.update(state or {})
|
||||
state = node_state
|
||||
a = super().add_agent(unique_id, state=state, **kwargs)
|
||||
node['agent'] = a
|
||||
def add_agent(self, *args, **kwargs):
|
||||
a = super().add_agent(*args, **kwargs)
|
||||
if "node_id" in a:
|
||||
assert self.G.nodes[a.node_id]["agent"] == a
|
||||
return a
|
||||
|
||||
def node_id_for(self, agent_id):
|
||||
return self._node_ids[agent_id][1]
|
||||
def agent_for_node_id(self, node_id):
|
||||
return self.G.nodes[node_id].get("agent")
|
||||
|
||||
def populate_network(self, agent_class, weights=None, **agent_params):
|
||||
if not hasattr(agent_class, "len"):
|
||||
agent_class = [agent_class]
|
||||
weights = None
|
||||
for (node_id, node) in self.G.nodes(data=True):
|
||||
if "agent" in node:
|
||||
continue
|
||||
a_class = self.random.choices(agent_class, weights)[0]
|
||||
self.add_agent(node_id=node_id, agent_class=a_class, **agent_params)
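populate_network assigns an agent to every node that does not have one yet, picking the class for each node (optionally by weight). A short sketch, assuming NetworkAgent is a suitable class for network-bound agents:

import networkx as nx
from soil import agents, environment

env = environment.NetworkEnvironment(topology=nx.complete_graph(5))
env.populate_network(agents.NetworkAgent)   # one agent per node without an "agent" entry
# With several classes (illustrative names): env.populate_network([AgentA, AgentB], weights=[0.8, 0.2])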
|
||||
|
||||
|
||||
Environment = NetworkEnvironment
|
||||
|
||||
|
||||
class EventedEnvironment(Environment):
|
||||
def broadcast(self, msg, sender, expiration=None, ttl=None, **kwargs):
|
||||
for agent in self.agents(**kwargs):
|
||||
self.logger.info(f'Telling {repr(agent)}: {msg} ttl={ttl}')
|
||||
try:
|
||||
agent._inbox.append(events.Tell(payload=msg, sender=sender, expiration=expiration if ttl is None else self.now+ttl))
|
||||
except AttributeError:
|
||||
self.info(f'Agent {agent.unique_id} cannot receive events')
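broadcast wraps the message in a Tell (defined in the new soil/events.py just below), derives the expiration from ttl, and silently skips agents without an _inbox. A minimal sketch of a call, with an invented payload:

# `env` is an EventedEnvironment; every agent matching the keyword filters gets a
# Tell in its inbox, expiring two time units from now.
env.broadcast("price_update", sender=None, ttl=2)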
|
||||
|
||||
|
||||
soil/events.py (new file, 43 lines)
@@ -0,0 +1,43 @@
|
||||
from .time import Cond
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any
|
||||
from uuid import uuid4
|
||||
|
||||
class Event:
|
||||
pass
|
||||
|
||||
@dataclass
|
||||
class Message:
|
||||
payload: Any
|
||||
sender: Any = None
|
||||
expiration: float = None
|
||||
id: int = field(default_factory=uuid4)
|
||||
|
||||
def expired(self, when):
|
||||
return self.expiration is not None and self.expiration < when
|
||||
|
||||
class Reply(Message):
|
||||
source: Message
|
||||
|
||||
|
||||
class Ask(Message):
|
||||
reply: Message = None
|
||||
|
||||
def replied(self, expiration=None):
|
||||
def ready(agent):
|
||||
return self.reply is not None or agent.now > expiration
|
||||
|
||||
def value(agent):
|
||||
if agent.now > expiration:
|
||||
raise TimedOut(f'No answer received for {self}')
|
||||
return self.reply
|
||||
|
||||
return Cond(func=ready, return_func=value)
|
||||
|
||||
|
||||
class Tell(Message):
|
||||
pass
|
||||
|
||||
|
||||
class TimedOut(Exception):
|
||||
pass
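Ask.replied builds a time.Cond: ready fires once reply is set (or the deadline passes) and value either returns the reply or raises TimedOut. The sketch below assumes that agents keep an _inbox list and that a generator state may yield a Cond to wait on; names other than Ask and replied are made up.

from soil.events import Ask

def ask_price(self):                              # illustrative generator state
    msg = Ask(payload="price?", sender=self)
    peer = self.model.agent_for_node_id(0)        # lookup helper added in this diff
    peer._inbox.append(msg)
    # Resumes when the peer sets msg.reply; if self.now passes the deadline first,
    # Cond.return_value raises TimedOut instead.
    reply = yield msg.replied(expiration=self.now + 5)
    self.last_offer = reply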
|
||||
@@ -1,7 +1,9 @@
|
||||
import os
|
||||
import sys
|
||||
from time import time as current_time
|
||||
from io import BytesIO
|
||||
from sqlalchemy import create_engine
|
||||
from textwrap import dedent, indent
|
||||
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
@@ -9,7 +11,7 @@ import networkx as nx
|
||||
|
||||
|
||||
from .serialization import deserialize
|
||||
from .utils import open_or_reuse, logger, timer
|
||||
from .utils import try_backup, open_or_reuse, logger, timer
|
||||
|
||||
|
||||
from . import utils, network
|
||||
@@ -23,54 +25,58 @@ class DryRunner(BytesIO):
|
||||
|
||||
def write(self, txt):
|
||||
if self.__copy_to:
|
||||
self.__copy_to.write('{}:::{}'.format(self.__fname, txt))
|
||||
self.__copy_to.write("{}:::{}".format(self.__fname, txt))
|
||||
try:
|
||||
super().write(txt)
|
||||
except TypeError:
|
||||
super().write(bytes(txt, 'utf-8'))
|
||||
super().write(bytes(txt, "utf-8"))
|
||||
|
||||
def close(self):
|
||||
content = '(binary data not shown)'
|
||||
content = "(binary data not shown)"
|
||||
try:
|
||||
content = self.getvalue().decode()
|
||||
except UnicodeDecodeError:
|
||||
pass
|
||||
logger.info('**Not** written to {} (dry run mode):\n\n{}\n\n'.format(self.__fname, content))
|
||||
logger.info(
|
||||
"**Not** written to {} (dry run mode):\n\n{}\n\n".format(
|
||||
self.__fname, content
|
||||
)
|
||||
)
|
||||
super().close()
|
||||
|
||||
|
||||
class Exporter:
|
||||
'''
|
||||
"""
|
||||
Interface for all exporters. It is not necessary, but it is useful
|
||||
if you don't plan to implement all the methods.
|
||||
'''
|
||||
"""
|
||||
|
||||
def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
|
||||
self.simulation = simulation
|
||||
outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
|
||||
self.outdir = os.path.join(outdir,
|
||||
simulation.group or '',
|
||||
simulation.name)
|
||||
outdir = outdir or os.path.join(os.getcwd(), "soil_output")
|
||||
self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
|
||||
self.dry_run = dry_run
|
||||
if copy_to is None and dry_run:
|
||||
copy_to = sys.stdout
|
||||
self.copy_to = copy_to
|
||||
|
||||
def sim_start(self):
|
||||
'''Method to call when the simulation starts'''
|
||||
"""Method to call when the simulation starts"""
|
||||
pass
|
||||
|
||||
def sim_end(self):
|
||||
'''Method to call when the simulation ends'''
|
||||
"""Method to call when the simulation ends"""
|
||||
pass
|
||||
|
||||
def trial_start(self, env):
|
||||
'''Method to call when a trial starts'''

"""Method to call when a trial starts"""
|
||||
pass
|
||||
|
||||
def trial_end(self, env):
|
||||
'''Method to call when a trial ends'''
|
||||
"""Method to call when a trial ends"""
|
||||
pass
|
||||
|
||||
def output(self, f, mode='w', **kwargs):
|
||||
def output(self, f, mode="w", **kwargs):
|
||||
if self.dry_run:
|
||||
f = DryRunner(f, copy_to=self.copy_to)
|
||||
else:
|
||||
@@ -81,134 +87,127 @@ class Exporter:
|
||||
pass
|
||||
return open_or_reuse(f, mode=mode, **kwargs)
|
||||
|
||||
|
||||
class default(Exporter):
|
||||
'''Default exporter. Writes sqlite results, as well as the simulation YAML'''
|
||||
|
||||
def sim_start(self):
|
||||
if not self.dry_run:
|
||||
logger.info('Dumping results to %s', self.outdir)
|
||||
with self.output(self.simulation.name + '.dumped.yml') as f:
|
||||
f.write(self.simulation.to_yaml())
|
||||
else:
|
||||
logger.info('NOT dumping results')
|
||||
|
||||
def trial_end(self, env):
|
||||
if not self.dry_run:
|
||||
with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
|
||||
env.id)):
|
||||
engine = create_engine('sqlite:///{}.sqlite'.format(env.id), echo=False)
|
||||
|
||||
dc = env.datacollector
|
||||
for (t, df) in get_dc_dfs(dc):
|
||||
df.to_sql(t, con=engine, if_exists='append')
|
||||
def get_dfs(self, env):
|
||||
yield from get_dc_dfs(env.datacollector, trial_id=env.id)
|
||||
|
||||
|
||||
def get_dc_dfs(dc):
|
||||
dfs = {'env': dc.get_model_vars_dataframe(),
|
||||
'agents': dc.get_agent_vars_dataframe() }
|
||||
def get_dc_dfs(dc, trial_id=None):
|
||||
dfs = {
|
||||
"env": dc.get_model_vars_dataframe(),
|
||||
"agents": dc.get_agent_vars_dataframe(),
|
||||
}
|
||||
for table_name in dc.tables:
|
||||
dfs[table_name] = dc.get_table_dataframe(table_name)
|
||||
if trial_id:
|
||||
for (name, df) in dfs.items():
|
||||
df["trial_id"] = trial_id
|
||||
yield from dfs.items()
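get_dc_dfs turns the datacollector into named dataframes and tags each one with the trial id, and Exporter.get_dfs (added above) exposes that to subclasses. A small custom exporter built on the same hook might look like this; the file format is arbitrary and writing parquet needs an engine such as pyarrow.

class parquet(Exporter):
    """Illustrative exporter: one parquet file per dataframe and trial."""

    def trial_end(self, env):
        if self.dry_run:
            return
        for (name, df) in self.get_dfs(env):
            # self.output() honours dry_run and the simulation's output directory
            with self.output(f"{env.id}.{name}.parquet", mode="wb") as f:
                df.to_parquet(f)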
|
||||
|
||||
|
||||
class default(Exporter):
|
||||
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
|
||||
|
||||
def sim_start(self):
|
||||
if self.dry_run:
|
||||
logger.info("NOT dumping results")
|
||||
return
|
||||
logger.info("Dumping results to %s", self.outdir)
|
||||
with self.output(self.simulation.name + ".dumped.yml") as f:
|
||||
f.write(self.simulation.to_yaml())
|
||||
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
|
||||
try_backup(self.dbpath, remove=True)
|
||||
|
||||
def trial_end(self, env):
|
||||
if self.dry_run:
|
||||
logger.info("Running in DRY_RUN mode, the database will NOT be created")
|
||||
return
|
||||
|
||||
with timer(
|
||||
"Dumping simulation {} trial {}".format(self.simulation.name, env.id)
|
||||
):
|
||||
|
||||
engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
|
||||
|
||||
for (t, df) in self.get_dfs(env):
|
||||
df.to_sql(t, con=engine, if_exists="append")
|
||||
|
||||
|
||||
class csv(Exporter):
|
||||
|
||||
'''Export the state of each environment (and its agents) in a separate CSV file'''
|
||||
"""Export the state of each environment (and its agents) in a separate CSV file"""
|
||||
|
||||
def trial_end(self, env):
|
||||
with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.simulation.name,
|
||||
env.id,
|
||||
self.outdir)):
|
||||
for (df_name, df) in get_dc_dfs(env.datacollector):
|
||||
with self.output('{}.{}.csv'.format(env.id, df_name)) as f:
|
||||
with timer(
|
||||
"[CSV] Dumping simulation {} trial {} @ dir {}".format(
|
||||
self.simulation.name, env.id, self.outdir
|
||||
)
|
||||
):
|
||||
for (df_name, df) in self.get_dfs(env):
|
||||
with self.output("{}.{}.csv".format(env.id, df_name)) as f:
|
||||
df.to_csv(f)
|
||||
|
||||
|
||||
#TODO: reimplement GEXF exporting without history
|
||||
# TODO: reimplement GEXF exporting without history
|
||||
class gexf(Exporter):
|
||||
def trial_end(self, env):
|
||||
if self.dry_run:
|
||||
logger.info('Not dumping GEXF in dry_run mode')
|
||||
logger.info("Not dumping GEXF in dry_run mode")
|
||||
return
|
||||
|
||||
with timer('[GEXF] Dumping simulation {} trial {}'.format(self.simulation.name,
|
||||
env.id)):
|
||||
with self.output('{}.gexf'.format(env.id), mode='wb') as f:
|
||||
with timer(
|
||||
"[GEXF] Dumping simulation {} trial {}".format(self.simulation.name, env.id)
|
||||
):
|
||||
with self.output("{}.gexf".format(env.id), mode="wb") as f:
|
||||
network.dump_gexf(env.history_to_graph(), f)
|
||||
self.dump_gexf(env, f)
|
||||
|
||||
|
||||
class dummy(Exporter):
|
||||
|
||||
def sim_start(self):
|
||||
with self.output('dummy', 'w') as f:
|
||||
f.write('simulation started @ {}\n'.format(current_time()))
|
||||
with self.output("dummy", "w") as f:
|
||||
f.write("simulation started @ {}\n".format(current_time()))
|
||||
|
||||
def trial_start(self, env):
|
||||
with self.output('dummy', 'w') as f:
|
||||
f.write('trial started@ {}\n'.format(current_time()))
|
||||
with self.output("dummy", "w") as f:
|
||||
f.write("trial started@ {}\n".format(current_time()))
|
||||
|
||||
def trial_end(self, env):
|
||||
with self.output('dummy', 'w') as f:
|
||||
f.write('trial ended@ {}\n'.format(current_time()))
|
||||
with self.output("dummy", "w") as f:
|
||||
f.write("trial ended@ {}\n".format(current_time()))
|
||||
|
||||
def sim_end(self):
|
||||
with self.output('dummy', 'a') as f:
|
||||
f.write('simulation ended @ {}\n'.format(current_time()))
|
||||
with self.output("dummy", "a") as f:
|
||||
f.write("simulation ended @ {}\n".format(current_time()))
|
||||
|
||||
|
||||
class graphdrawing(Exporter):
|
||||
|
||||
def trial_end(self, env):
|
||||
# Outside effects
|
||||
f = plt.figure()
|
||||
nx.draw(env.G, node_size=10, width=0.2, pos=nx.spring_layout(env.G, scale=100), ax=f.add_subplot(111))
|
||||
with open('graph-{}.png'.format(env.id)) as f:
|
||||
nx.draw(
|
||||
env.G,
|
||||
node_size=10,
|
||||
width=0.2,
|
||||
pos=nx.spring_layout(env.G, scale=100),
|
||||
ax=f.add_subplot(111),
|
||||
)
|
||||
with open("graph-{}.png".format(env.id)) as f:
|
||||
f.savefig(f)
|
||||
|
||||
'''
|
||||
Convert an environment into a NetworkX graph
|
||||
'''
|
||||
def env_to_graph(env, history=None):
|
||||
G = nx.Graph(env.G)
|
||||
|
||||
for agent in env.network_agents:
|
||||
class summary(Exporter):
|
||||
"""Print a summary of each trial to sys.stdout"""
|
||||
|
||||
attributes = {'agent': str(agent.__class__)}
|
||||
lastattributes = {}
|
||||
spells = []
|
||||
lastvisible = False
|
||||
laststep = None
|
||||
if not history:
|
||||
history = sorted(list(env.state_to_tuples()))
|
||||
for _, t_step, attribute, value in history:
|
||||
if attribute == 'visible':
|
||||
nowvisible = value
|
||||
if nowvisible and not lastvisible:
|
||||
laststep = t_step
|
||||
if not nowvisible and lastvisible:
|
||||
spells.append((laststep, t_step))
|
||||
|
||||
lastvisible = nowvisible
|
||||
def trial_end(self, env):
|
||||
for (t, df) in self.get_dfs(env):
|
||||
if not len(df):
|
||||
continue
|
||||
key = 'attr_' + attribute
|
||||
if key not in attributes:
|
||||
attributes[key] = list()
|
||||
if key not in lastattributes:
|
||||
lastattributes[key] = (value, t_step)
|
||||
elif lastattributes[key][0] != value:
|
||||
last_value, laststep = lastattributes[key]
|
||||
commit_value = (last_value, laststep, t_step)
|
||||
if key not in attributes:
|
||||
attributes[key] = list()
|
||||
attributes[key].append(commit_value)
|
||||
lastattributes[key] = (value, t_step)
|
||||
for k, v in lastattributes.items():
|
||||
attributes[k].append((v[0], v[1], None))
|
||||
if lastvisible:
|
||||
spells.append((laststep, None))
|
||||
if spells:
|
||||
G.add_node(agent.id, spells=spells, **attributes)
|
||||
else:
|
||||
G.add_node(agent.id, **attributes)
|
||||
|
||||
return G
|
||||
msg = indent(str(df.describe()), " ")
|
||||
logger.info(
|
||||
dedent(
|
||||
f"""
|
||||
Dataframe {t}:
|
||||
"""
|
||||
)
|
||||
+ msg
|
||||
)
|
||||
|
||||
@@ -9,6 +9,7 @@ import networkx as nx
|
||||
|
||||
from . import config, serialization, basestring
|
||||
|
||||
|
||||
def from_config(cfg: config.NetConfig, dir_path: str = None):
|
||||
if not isinstance(cfg, config.NetConfig):
|
||||
cfg = config.NetConfig(**cfg)
|
||||
@@ -19,60 +20,65 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
|
||||
path = os.path.join(dir_path, path)
|
||||
extension = os.path.splitext(path)[1][1:]
|
||||
kwargs = {}
|
||||
if extension == 'gexf':
|
||||
kwargs['version'] = '1.2draft'
|
||||
kwargs['node_type'] = int
|
||||
if extension == "gexf":
|
||||
kwargs["version"] = "1.2draft"
|
||||
kwargs["node_type"] = int
|
||||
try:
|
||||
method = getattr(nx.readwrite, 'read_' + extension)
|
||||
method = getattr(nx.readwrite, "read_" + extension)
|
||||
except AttributeError:
|
||||
raise AttributeError('Unknown format')
|
||||
raise AttributeError("Unknown format")
|
||||
return method(path, **kwargs)
|
||||
|
||||
if cfg.params:
|
||||
net_args = cfg.params.dict()
|
||||
net_gen = net_args.pop('generator')
|
||||
net_gen = net_args.pop("generator")
|
||||
|
||||
if dir_path not in sys.path:
|
||||
sys.path.append(dir_path)
|
||||
|
||||
method = serialization.deserializer(net_gen,
|
||||
known_modules=['networkx.generators',])
|
||||
method = serialization.deserializer(
|
||||
net_gen,
|
||||
known_modules=[
|
||||
"networkx.generators",
|
||||
],
|
||||
)
|
||||
return method(**net_args)
|
||||
|
||||
if isinstance(cfg.topology, config.Topology):
|
||||
cfg = cfg.topology.dict()
|
||||
if isinstance(cfg.fixed, config.Topology):
|
||||
cfg = cfg.fixed.dict()
|
||||
|
||||
if isinstance(cfg, str) or isinstance(cfg, dict):
|
||||
return nx.json_graph.node_link_graph(cfg)
|
||||
|
||||
return nx.Graph()
|
||||
|
||||
|
||||
def agent_to_node(G, agent_id, node_id=None, shuffle=False, random=random):
|
||||
'''
|
||||
def find_unassigned(G, shuffle=False, random=random):
|
||||
"""
|
||||
Find a node in the topology that does not have an agent assigned to it yet.

Returns the id of the first unassigned node found (in random order when shuffle is True), or None if every node is already taken.
|
||||
'''
|
||||
#TODO: test
|
||||
if node_id is None:
|
||||
candidates = list(G.nodes(data=True))
|
||||
if shuffle:
|
||||
random.shuffle(candidates)
|
||||
for next_id, data in candidates:
|
||||
if data.get('agent_id', None) is None:
|
||||
node_id = next_id
|
||||
break
|
||||
|
||||
if node_id is None:
|
||||
raise ValueError(f"Not enough nodes in topology to assign one to agent {agent_id}")
|
||||
G.nodes[node_id]['agent_id'] = agent_id
|
||||
return node_id
|
||||
"""
|
||||
# TODO: test
|
||||
candidates = list(G.nodes(data=True))
|
||||
if shuffle:
|
||||
random.shuffle(candidates)
|
||||
for next_id, data in candidates:
|
||||
if "agent" not in data:
|
||||
return next_id
|
||||
return None
|
||||
|
||||
|
||||
def dump_gexf(G, f):
|
||||
for node in G.nodes():
|
||||
if 'pos' in G.nodes[node]:
|
||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
||||
del (G.nodes[node]['pos'])
|
||||
if "pos" in G.nodes[node]:
|
||||
G.nodes[node]["viz"] = {
|
||||
"position": {
|
||||
"x": G.nodes[node]["pos"][0],
|
||||
"y": G.nodes[node]["pos"][1],
|
||||
"z": 0.0,
|
||||
}
|
||||
}
|
||||
del G.nodes[node]["pos"]
|
||||
|
||||
nx.write_gexf(G, f, version="1.2draft")
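A quick sketch of how the two helpers fit together: find_unassigned returns a node with no "agent" attribute (or None), and dump_gexf folds any pos attribute into the GEXF viz block before writing. Graph and file names are illustrative.

import random
import networkx as nx
from soil import network

G = nx.path_graph(3)
node_id = network.find_unassigned(G, shuffle=True, random=random)
G.nodes[node_id]["agent"] = "reserved"     # mark the node as taken

G.nodes[0]["pos"] = (0.0, 1.0)             # converted to a viz position on export
with open("example.gexf", "wb") as f:
    network.dump_gexf(G, f)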
|
||||
|
||||
@@ -15,49 +15,14 @@ import networkx as nx
|
||||
from jinja2 import Template
|
||||
|
||||
|
||||
logger = logging.getLogger('soil')
|
||||
|
||||
|
||||
# def load_network(network_params, dir_path=None):
|
||||
# G = nx.Graph()
|
||||
|
||||
# if not network_params:
|
||||
# return G
|
||||
|
||||
# if 'path' in network_params:
|
||||
# path = network_params['path']
|
||||
# if dir_path and not os.path.isabs(path):
|
||||
# path = os.path.join(dir_path, path)
|
||||
# extension = os.path.splitext(path)[1][1:]
|
||||
# kwargs = {}
|
||||
# if extension == 'gexf':
|
||||
# kwargs['version'] = '1.2draft'
|
||||
# kwargs['node_type'] = int
|
||||
# try:
|
||||
# method = getattr(nx.readwrite, 'read_' + extension)
|
||||
# except AttributeError:
|
||||
# raise AttributeError('Unknown format')
|
||||
# G = method(path, **kwargs)
|
||||
|
||||
# elif 'generator' in network_params:
|
||||
# net_args = network_params.copy()
|
||||
# net_gen = net_args.pop('generator')
|
||||
|
||||
# if dir_path not in sys.path:
|
||||
# sys.path.append(dir_path)
|
||||
|
||||
# method = deserializer(net_gen,
|
||||
# known_modules=['networkx.generators',])
|
||||
# G = method(**net_args)
|
||||
|
||||
# return G
|
||||
logger = logging.getLogger("soil")
|
||||
|
||||
|
||||
def load_file(infile):
|
||||
folder = os.path.dirname(infile)
|
||||
if folder not in sys.path:
|
||||
sys.path.append(folder)
|
||||
with open(infile, 'r') as f:
|
||||
with open(infile, "r") as f:
|
||||
return list(chain.from_iterable(map(expand_template, load_string(f))))
|
||||
|
||||
|
||||
@@ -66,14 +31,15 @@ def load_string(string):
|
||||
|
||||
|
||||
def expand_template(config):
|
||||
if 'template' not in config:
|
||||
if "template" not in config:
|
||||
yield config
|
||||
return
|
||||
if 'vars' not in config:
|
||||
raise ValueError(('You must provide a definition of variables'
|
||||
' for the template.'))
|
||||
if "vars" not in config:
|
||||
raise ValueError(
|
||||
("You must provide a definition of variables" " for the template.")
|
||||
)
|
||||
|
||||
template = config['template']
|
||||
template = config["template"]
|
||||
|
||||
if not isinstance(template, str):
|
||||
template = yaml.dump(template)
|
||||
@@ -85,9 +51,9 @@ def expand_template(config):
|
||||
blank_str = template.render({k: 0 for k in params[0].keys()})
|
||||
blank = list(load_string(blank_str))
|
||||
if len(blank) > 1:
|
||||
raise ValueError('Templates must not return more than one configuration')
|
||||
if 'name' in blank[0]:
|
||||
raise ValueError('Templates cannot be named, use group instead')
|
||||
raise ValueError("Templates must not return more than one configuration")
|
||||
if "name" in blank[0]:
|
||||
raise ValueError("Templates cannot be named, use group instead")
|
||||
|
||||
for ps in params:
|
||||
string = template.render(ps)
|
||||
@@ -96,32 +62,32 @@ def expand_template(config):
|
||||
|
||||
|
||||
def params_for_template(config):
|
||||
sampler_config = config.get('sampler', {'N': 100})
|
||||
sampler = sampler_config.pop('method', 'SALib.sample.morris.sample')
|
||||
sampler_config = config.get("sampler", {"N": 100})
|
||||
sampler = sampler_config.pop("method", "SALib.sample.morris.sample")
|
||||
sampler = deserializer(sampler)
|
||||
bounds = config['vars']['bounds']
|
||||
bounds = config["vars"]["bounds"]
|
||||
|
||||
problem = {
|
||||
'num_vars': len(bounds),
|
||||
'names': list(bounds.keys()),
|
||||
'bounds': list(v for v in bounds.values())
|
||||
"num_vars": len(bounds),
|
||||
"names": list(bounds.keys()),
|
||||
"bounds": list(v for v in bounds.values()),
|
||||
}
|
||||
samples = sampler(problem, **sampler_config)
|
||||
|
||||
lists = config['vars'].get('lists', {})
|
||||
lists = config["vars"].get("lists", {})
|
||||
names = list(lists.keys())
|
||||
values = list(lists.values())
|
||||
combs = list(product(*values))
|
||||
|
||||
allnames = names + problem['names']
|
||||
allvalues = [(list(i[0])+list(i[1])) for i in product(combs, samples)]
|
||||
allnames = names + problem["names"]
|
||||
allvalues = [(list(i[0]) + list(i[1])) for i in product(combs, samples)]
|
||||
params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
|
||||
return params
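params_for_template crosses a SALib sample over vars.bounds with the Cartesian product of vars.lists. A rough illustration of the configuration shape it reads, written as the dict the YAML loader would produce; values and variable names are made up, and the default sampler is SALib.sample.morris.sample with N=100.

config = {
    "template": {"model_params": {"prob_infect": "{{ p }}", "variant": "{{ label }}"}},
    "vars": {
        "bounds": {"p": [0.0, 1.0]},        # sampled with SALib
        "lists": {"label": ["a", "b"]},     # expanded as a Cartesian product
    },
    "sampler": {"method": "SALib.sample.morris.sample", "N": 8},
}
# params_for_template(config) -> list of dicts such as
# [{"label": "a", "p": 0.12}, {"label": "b", "p": 0.12}, ...]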
|
||||
|
||||
|
||||
def load_files(*patterns, **kwargs):
|
||||
for pattern in patterns:
|
||||
for i in glob(pattern, **kwargs):
|
||||
for i in glob(pattern, **kwargs, recursive=True):
|
||||
for cfg in load_file(i):
|
||||
path = os.path.abspath(i)
|
||||
yield Config.from_raw(cfg), path
|
||||
@@ -136,22 +102,24 @@ def load_config(cfg):
|
||||
yield from load_files(cfg)
|
||||
|
||||
|
||||
builtins = importlib.import_module('builtins')
|
||||
builtins = importlib.import_module("builtins")
|
||||
|
||||
KNOWN_MODULES = ['soil', ]
|
||||
KNOWN_MODULES = [
|
||||
"soil",
|
||||
]
|
||||
|
||||
|
||||
def name(value, known_modules=KNOWN_MODULES):
|
||||
'''Return a name that can be imported, to serialize/deserialize an object'''
|
||||
"""Return a name that can be imported, to serialize/deserialize an object"""
|
||||
if value is None:
|
||||
return 'None'
|
||||
return "None"
|
||||
if not isinstance(value, type): # Get the class name first
|
||||
value = type(value)
|
||||
tname = value.__name__
|
||||
if hasattr(builtins, tname):
|
||||
return tname
|
||||
modname = value.__module__
|
||||
if modname == '__main__':
|
||||
if modname == "__main__":
|
||||
return tname
|
||||
if known_modules and modname in known_modules:
|
||||
return tname
|
||||
@@ -161,17 +129,17 @@ def name(value, known_modules=KNOWN_MODULES):
|
||||
module = importlib.import_module(kmod)
|
||||
if hasattr(module, tname):
|
||||
return tname
|
||||
return '{}.{}'.format(modname, tname)
|
||||
return "{}.{}".format(modname, tname)
|
||||
|
||||
|
||||
def serializer(type_):
|
||||
if type_ != 'str' and hasattr(builtins, type_):
|
||||
if type_ != "str" and hasattr(builtins, type_):
|
||||
return repr
|
||||
return lambda x: x
|
||||
|
||||
|
||||
def serialize(v, known_modules=KNOWN_MODULES):
|
||||
'''Get a text representation of an object.'''
|
||||
"""Get a text representation of an object."""
|
||||
tname = name(v, known_modules=known_modules)
|
||||
func = serializer(tname)
|
||||
return func(v), tname
|
||||
@@ -196,9 +164,9 @@ IS_CLASS = re.compile(r"<class '(.*)'>")
|
||||
def deserializer(type_, known_modules=KNOWN_MODULES):
|
||||
if type(type_) != str: # Already deserialized
|
||||
return type_
|
||||
if type_ == 'str':
|
||||
return lambda x='': x
|
||||
if type_ == 'None':
|
||||
if type_ == "str":
|
||||
return lambda x="": x
|
||||
if type_ == "None":
|
||||
return lambda x=None: None
|
||||
if hasattr(builtins, type_): # Check if it's a builtin type
|
||||
cls = getattr(builtins, type_)
|
||||
@@ -208,7 +176,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
|
||||
modname, tname = match.group(1).rsplit(".", 1)
|
||||
module = importlib.import_module(modname)
|
||||
cls = getattr(module, tname)
|
||||
return getattr(cls, 'deserialize', cls)
|
||||
return getattr(cls, "deserialize", cls)
|
||||
|
||||
# Otherwise, see if we can find the module and the class
|
||||
options = []
|
||||
@@ -217,7 +185,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
|
||||
if mod:
|
||||
options.append((mod, type_))
|
||||
|
||||
if '.' in type_: # Fully qualified module
|
||||
if "." in type_: # Fully qualified module
|
||||
module, type_ = type_.rsplit(".", 1)
|
||||
options.append((module, type_))
|
||||
|
||||
@@ -226,27 +194,37 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
|
||||
try:
|
||||
module = importlib.import_module(modname)
|
||||
cls = getattr(module, tname)
|
||||
return getattr(cls, 'deserialize', cls)
|
||||
return getattr(cls, "deserialize", cls)
|
||||
except (ImportError, AttributeError) as ex:
|
||||
errors.append((modname, tname, ex))
|
||||
raise Exception('Could not find type {}. Tried: {}'.format(type_, errors))
|
||||
raise ValueError('Could not find type "{}". Tried: {}'.format(type_, errors))
|
||||
|
||||
|
||||
def deserialize(type_, value=None, **kwargs):
|
||||
'''Get an object from a text representation'''
|
||||
def deserialize(type_, value=None, globs=None, **kwargs):
|
||||
"""Get an object from a text representation"""
|
||||
if not isinstance(type_, str):
|
||||
return type_
|
||||
des = deserializer(type_, **kwargs)
|
||||
if globs and type_ in globs:
|
||||
des = globs[type_]
|
||||
else:
|
||||
try:
|
||||
des = deserializer(type_, **kwargs)
|
||||
except ValueError as ex:
|
||||
try:
|
||||
des = eval(type_)
|
||||
except Exception:
|
||||
raise ex
|
||||
if value is None:
|
||||
return des
|
||||
return des(value)
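serialize and deserialize round-trip between objects and importable names; with this change, deserialize also consults a globs mapping and finally falls back to eval before giving up. Two hedged examples following the lookup rules above:

from soil import serialization

text, tname = serialization.serialize(4)           # ("4", "int"): builtin types use repr
cls = serialization.deserialize("networkx.Graph")  # fully-qualified name -> the class itself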
|
||||
|
||||
|
||||
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
|
||||
'''Return the list of deserialized objects'''
|
||||
"""Return the list of deserialized objects"""
|
||||
# TODO: remove
|
||||
print("SERIALIZATION", kwargs)
|
||||
objects = []
|
||||
for name in names:
|
||||
mod = deserialize(name, known_modules=known_modules)
|
||||
objects.append(mod(*args, **kwargs))
|
||||
return objects
|
||||
|
||||
|
||||
@@ -11,22 +11,20 @@ import networkx as nx
|
||||
from textwrap import dedent
|
||||
|
||||
from dataclasses import dataclass, field, asdict
|
||||
from typing import Any, Dict, Union, Optional
|
||||
from typing import Any, Dict, Union, Optional, List
|
||||
|
||||
|
||||
from networkx.readwrite import json_graph
|
||||
from functools import partial
|
||||
import pickle
|
||||
|
||||
from . import serialization, utils, basestring, agents
|
||||
from . import serialization, exporters, utils, basestring, agents
|
||||
from .environment import Environment
|
||||
from .utils import logger, run_and_return_exceptions
|
||||
from .exporters import default
|
||||
from .time import INFINITY
|
||||
from .config import Config, convert_old
|
||||
|
||||
|
||||
#TODO: change documentation for simulation
|
||||
# TODO: change documentation for simulation
|
||||
@dataclass
|
||||
class Simulation:
|
||||
"""
|
||||
@@ -35,74 +33,105 @@ class Simulation:
|
||||
config (optional): :class:`config.Config`
|
||||
name of the Simulation
|
||||
|
||||
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
|
||||
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
|
||||
"""
|
||||
version: str = '2'
|
||||
name: str = 'Unnamed simulation'
|
||||
description: Optional[str] = ''
|
||||
|
||||
version: str = "2"
|
||||
name: str = "Unnamed simulation"
|
||||
description: Optional[str] = ""
|
||||
group: str = None
|
||||
model_class: Union[str, type] = 'soil.Environment'
|
||||
model_class: Union[str, type] = "soil.Environment"
|
||||
model_params: dict = field(default_factory=dict)
|
||||
seed: str = field(default_factory=lambda: current_time())
|
||||
dir_path: str = field(default_factory=lambda: os.getcwd())
|
||||
max_time: float = float('inf')
|
||||
max_time: float = float("inf")
|
||||
max_steps: int = -1
|
||||
interval: int = 1
|
||||
num_trials: int = 3
|
||||
num_trials: int = 1
|
||||
parallel: Optional[bool] = None
|
||||
exporters: Optional[List[str]] = field(default_factory=list)
|
||||
outdir: Optional[str] = None
|
||||
exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
|
||||
dry_run: bool = False
|
||||
extra: Dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, env):
|
||||
def from_dict(cls, env, **kwargs):
|
||||
|
||||
ignored = {k: v for k, v in env.items()
|
||||
if k not in inspect.signature(cls).parameters}
|
||||
ignored = {
|
||||
k: v for k, v in env.items() if k not in inspect.signature(cls).parameters
|
||||
}
|
||||
|
||||
kwargs = {k:v for k, v in env.items() if k not in ignored}
|
||||
d = {k: v for k, v in env.items() if k not in ignored}
|
||||
if ignored:
|
||||
kwargs.setdefault('extra', {}).update(ignored)
|
||||
d.setdefault("extra", {}).update(ignored)
|
||||
if ignored:
|
||||
print(f'Warning: Ignoring these parameters (added to "extra"): { ignored }')
|
||||
d.update(kwargs)
|
||||
|
||||
return cls(**kwargs)
|
||||
return cls(**d)
|
||||
|
||||
def run_simulation(self, *args, **kwargs):
|
||||
return self.run(*args, **kwargs)
|
||||
|
||||
def run(self, *args, **kwargs):
|
||||
'''Run the simulation and return the list of resulting environments'''
|
||||
logger.info(dedent('''
|
||||
"""Run the simulation and return the list of resulting environments"""
|
||||
logger.info(
|
||||
dedent(
|
||||
"""
|
||||
Simulation:
|
||||
---
|
||||
''') +
|
||||
self.to_yaml())
|
||||
"""
|
||||
)
|
||||
+ self.to_yaml()
|
||||
)
|
||||
return list(self.run_gen(*args, **kwargs))
|
||||
|
||||
def run_gen(self, parallel=False, dry_run=False,
|
||||
exporters=[default, ], outdir=None, exporter_params={},
|
||||
log_level=None,
|
||||
**kwargs):
|
||||
'''Run the simulation and yield the resulting environments.'''
|
||||
def run_gen(
|
||||
self,
|
||||
parallel=False,
|
||||
dry_run=None,
|
||||
exporters=None,
|
||||
outdir=None,
|
||||
exporter_params={},
|
||||
log_level=None,
|
||||
**kwargs,
|
||||
):
|
||||
"""Run the simulation and yield the resulting environments."""
|
||||
if log_level:
|
||||
logger.setLevel(log_level)
|
||||
logger.info('Using exporters: %s', exporters or [])
|
||||
logger.info('Output directory: %s', outdir)
|
||||
exporters = serialization.deserialize_all(exporters,
|
||||
simulation=self,
|
||||
known_modules=['soil.exporters', ],
|
||||
dry_run=dry_run,
|
||||
outdir=outdir,
|
||||
**exporter_params)
|
||||
outdir = outdir or self.outdir
|
||||
logger.info("Using exporters: %s", exporters or [])
|
||||
logger.info("Output directory: %s", outdir)
|
||||
if dry_run is None:
|
||||
dry_run = self.dry_run
|
||||
if exporters is None:
|
||||
exporters = self.exporters
|
||||
if not exporter_params:
|
||||
exporter_params = self.exporter_params
|
||||
|
||||
with utils.timer('simulation {}'.format(self.name)):
|
||||
exporters = serialization.deserialize_all(
|
||||
exporters,
|
||||
simulation=self,
|
||||
known_modules=[
|
||||
"soil.exporters",
|
||||
],
|
||||
dry_run=dry_run,
|
||||
outdir=outdir,
|
||||
**exporter_params,
|
||||
)
|
||||
|
||||
with utils.timer("simulation {}".format(self.name)):
|
||||
for exporter in exporters:
|
||||
exporter.sim_start()
|
||||
|
||||
for env in utils.run_parallel(func=self.run_trial,
|
||||
iterable=range(int(self.num_trials)),
|
||||
parallel=parallel,
|
||||
log_level=log_level,
|
||||
**kwargs):
|
||||
for env in utils.run_parallel(
|
||||
func=self.run_trial,
|
||||
iterable=range(int(self.num_trials)),
|
||||
parallel=parallel,
|
||||
log_level=log_level,
|
||||
**kwargs,
|
||||
):
|
||||
|
||||
for exporter in exporters:
|
||||
exporter.trial_start(env)
|
||||
@@ -115,28 +144,36 @@ class Simulation:
|
||||
for exporter in exporters:
|
||||
exporter.sim_end()
|
||||
|
||||
def get_env(self, trial_id=0, **kwargs):
|
||||
'''Create an environment for a trial of the simulation'''
|
||||
def get_env(self, trial_id=0, model_params=None, **kwargs):
|
||||
"""Create an environment for a trial of the simulation"""
|
||||
|
||||
def deserialize_reporters(reporters):
|
||||
for (k, v) in reporters.items():
|
||||
if isinstance(v, str) and v.startswith('py:'):
|
||||
reporters[k] = serialization.deserialize(value.lsplit(':', 1)[1])
|
||||
if isinstance(v, str) and v.startswith("py:"):
|
||||
reporters[k] = serialization.deserialize(v.split(":", 1)[1])
|
||||
return reporters
|
||||
|
||||
model_params = self.model_params.copy()
|
||||
model_params.update(kwargs)
|
||||
params = self.model_params.copy()
|
||||
if model_params:
|
||||
params.update(model_params)
|
||||
params.update(kwargs)
|
||||
|
||||
agent_reporters = deserialize_reporters(model_params.pop('agent_reporters', {}))
|
||||
model_reporters = deserialize_reporters(model_params.pop('model_reporters', {}))
|
||||
agent_reporters = deserialize_reporters(params.pop("agent_reporters", {}))
|
||||
model_reporters = deserialize_reporters(params.pop("model_reporters", {}))
|
||||
|
||||
env = serialization.deserialize(self.model_class)
|
||||
return env(id=f'{self.name}_trial_{trial_id}',
|
||||
seed=f'{self.seed}_trial_{trial_id}',
|
||||
dir_path=self.dir_path,
|
||||
agent_reporters=agent_reporters,
|
||||
model_reporters=model_reporters,
|
||||
**model_params)
|
||||
env = serialization.deserialize(self.model_class)
|
||||
return env(
|
||||
id=f"{self.name}_trial_{trial_id}",
|
||||
seed=f"{self.seed}_trial_{trial_id}",
|
||||
dir_path=self.dir_path,
|
||||
agent_reporters=agent_reporters,
|
||||
model_reporters=model_reporters,
|
||||
**params,
|
||||
)
|
||||
|
||||
def run_trial(self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts):
|
||||
def run_trial(
|
||||
self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts
|
||||
):
|
||||
"""
|
||||
Run a single trial of the simulation
|
||||
|
||||
@@ -145,73 +182,83 @@ class Simulation:
|
||||
logger.setLevel(log_level)
|
||||
model = self.get_env(trial_id, **opts)
|
||||
trial_id = trial_id if trial_id is not None else current_time()
|
||||
with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
|
||||
return self.run_model(model=model, trial_id=trial_id, until=until, log_level=log_level)
|
||||
with utils.timer("Simulation {} trial {}".format(self.name, trial_id)):
|
||||
return self.run_model(
|
||||
model=model, trial_id=trial_id, until=until, log_level=log_level
|
||||
)
|
||||
|
||||
def run_model(self, model, until=None, **opts):
|
||||
# Set-up trial environment and graph
|
||||
until = float(until or self.max_time or 'inf')
|
||||
until = float(until or self.max_time or "inf")
|
||||
|
||||
# Set up agents on nodes
|
||||
def is_done():
|
||||
return False
|
||||
return not model.running
|
||||
|
||||
if until and hasattr(model.schedule, 'time'):
|
||||
if until and hasattr(model.schedule, "time"):
|
||||
prev = is_done
|
||||
|
||||
def is_done():
|
||||
return prev() or model.schedule.time >= until
|
||||
|
||||
if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, 'steps'):
|
||||
if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, "steps"):
|
||||
prev_steps = is_done
|
||||
|
||||
def is_done():
|
||||
return prev_steps() or model.schedule.steps >= self.max_steps
|
||||
|
||||
newline = '\n'
|
||||
logger.info(dedent(f'''
|
||||
newline = "\n"
|
||||
logger.info(
|
||||
dedent(
|
||||
f"""
|
||||
Model stats:
|
||||
Agents (total: { model.schedule.get_agent_count() }):
|
||||
- { (newline + ' - ').join(str(a) for a in model.schedule.agents) }'''
|
||||
f'''
|
||||
- { (newline + ' - ').join(str(a) for a in model.schedule.agents) }
|
||||
|
||||
Topologies (size):
|
||||
- { dict( (k, len(v)) for (k, v) in model.topologies.items()) }
|
||||
''' if getattr(model, "topologies", None) else ''
|
||||
))
|
||||
Topology size: { len(model.G) if hasattr(model, "G") else 0 }
|
||||
"""
|
||||
)
|
||||
)
|
||||
|
||||
while not is_done():
|
||||
utils.logger.debug(f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}')
|
||||
utils.logger.debug(
|
||||
f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}'
|
||||
)
|
||||
model.step()
|
||||
|
||||
if (
|
||||
model.schedule.time < until
|
||||
): # Simulation ended (no more steps) before the expected time
|
||||
model.schedule.time = until
|
||||
return model
|
||||
|
||||
def to_dict(self):
|
||||
d = asdict(self)
|
||||
if not isinstance(d['model_class'], str):
|
||||
d['model_class'] = serialization.name(d['model_class'])
|
||||
d['model_params'] = serialization.serialize_dict(d['model_params'])
|
||||
d['dir_path'] = str(d['dir_path'])
|
||||
d['version'] = '2'
|
||||
if not isinstance(d["model_class"], str):
|
||||
d["model_class"] = serialization.name(d["model_class"])
|
||||
d["model_params"] = serialization.serialize_dict(d["model_params"])
|
||||
d["dir_path"] = str(d["dir_path"])
|
||||
d["version"] = "2"
|
||||
return d
|
||||
|
||||
def to_yaml(self):
|
||||
return yaml.dump(self.to_dict())
|
||||
|
||||
|
||||
def iter_from_config(*cfgs):
|
||||
def iter_from_config(*cfgs, **kwargs):
|
||||
for config in cfgs:
|
||||
configs = list(serialization.load_config(config))
|
||||
for config, path in configs:
|
||||
d = dict(config)
|
||||
if 'dir_path' not in d:
|
||||
d['dir_path'] = os.path.dirname(path)
|
||||
yield Simulation.from_dict(d)
|
||||
if "dir_path" not in d:
|
||||
d["dir_path"] = os.path.dirname(path)
|
||||
yield Simulation.from_dict(d, **kwargs)
|
||||
|
||||
|
||||
def from_config(conf_or_path):
|
||||
lst = list(iter_from_config(conf_or_path))
|
||||
if len(lst) > 1:
|
||||
raise AttributeError('Provide only one configuration')
|
||||
raise AttributeError("Provide only one configuration")
|
||||
return lst[0]
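A minimal usage sketch of this entry point (not part of the diff; the file name and the dry_run flag are only illustrative, mirroring how the tests further below call the API):

# Hedged example: load a single-simulation YAML config and run it.
from soil import simulation

s = simulation.from_config("examples/complete.yml")  # more than one config raises AttributeError
envs = s.run_simulation(dry_run=True)                # one environment per trial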
|
||||
|
||||
|
||||
|
||||
174 soil/time.py
@@ -2,11 +2,20 @@ from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapify
import math

from inspect import getsource
from numbers import Number

from .utils import logger
from mesa import Agent as MesaAgent


INFINITY = float('inf')
INFINITY = float("inf")


class DeadAgent(Exception):
pass


class When:
def __init__(self, time):
@@ -14,9 +23,66 @@ class When:
|
||||
return time
|
||||
self._time = time
|
||||
|
||||
def abs(self, time):
|
||||
def next(self, time):
|
||||
return self._time
|
||||
|
||||
def abs(self, time):
|
||||
return self
|
||||
|
||||
def __repr__(self):
|
||||
return str(f"When({self._time})")
|
||||
|
||||
def __lt__(self, other):
|
||||
if isinstance(other, Number):
|
||||
return self._time < other
|
||||
return self._time < other.next(self._time)
|
||||
|
||||
def __gt__(self, other):
|
||||
if isinstance(other, Number):
|
||||
return self._time > other
|
||||
return self._time > other.next(self._time)
|
||||
|
||||
def ready(self, agent):
|
||||
return self._time <= agent.model.schedule.time
|
||||
|
||||
def return_value(self, agent):
|
||||
return None

class Cond(When):
def __init__(self, func, delta=1, return_func=lambda agent: None):
self._func = func
self._delta = delta
self._checked = False
self._return_func = return_func

def next(self, time):
if self._checked:
return time + self._delta
return time

def abs(self, time):
return self

def ready(self, agent):
self._checked = True
return self._func(agent)

def return_value(self, agent):
return self._return_func(agent)

def __eq__(self, other):
return False

def __lt__(self, other):
return True

def __gt__(self, other):
return False

def __repr__(self):
return str(f'Cond("{getsource(self._func)}")')


NEVER = When(INFINITY)
|
||||
@@ -26,11 +92,19 @@ class Delta(When):
|
||||
self._delta = delta
|
||||
|
||||
def __eq__(self, other):
|
||||
return self._delta == other._delta
|
||||
if isinstance(other, Delta):
|
||||
return self._delta == other._delta
|
||||
return False
|
||||
|
||||
def abs(self, time):
|
||||
return When(self._delta + time)
|
||||
|
||||
def next(self, time):
|
||||
return time + self._delta
|
||||
|
||||
def __repr__(self):
|
||||
return str(f"Delta({self._delta})")
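A brief, hedged sketch of how these classes are meant to be used: an agent's step can return a Delta, a When or a Cond to tell the scheduler when to wake it up again (the agent class and the concrete values are illustrative; per the scheduler below, returning nothing is treated as Delta(1)):

# Hedged example: control when the scheduler runs this agent next.
from soil import agents
from soil.time import Delta, When, Cond

class Sleeper(agents.BaseAgent):
    def step(self):
        if self.model.schedule.time < 5:
            return Delta(2)    # run again 2 time units from now
        if self.model.schedule.time < 10:
            return When(10)    # run again at absolute time 10
        # run again once the condition holds, re-checking every 1 time unit
        return Cond(lambda agent: agent.model.schedule.time >= 20, delta=1)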

class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
@@ -42,18 +116,21 @@ class TimedActivation(BaseScheduler):
self._next = {}
self._queue = []
self.next_time = 0
self.logger = logger.getChild(f'time_{ self.model }')
self.logger = logger.getChild(f"time_{ self.model }")

def add(self, agent: MesaAgent, when=None):
if when is None:
when = self.time
when = When(self.time)
elif not isinstance(when, When):
when = When(when)
if agent.unique_id in self._agents:
self._queue.remove((self._next[agent.unique_id], agent.unique_id))
del self._agents[agent.unique_id]
heapify(self._queue)
if agent.unique_id in self._next:
self._queue.remove((self._next[agent.unique_id], agent))
heapify(self._queue)

heappush(self._queue, (when, agent.unique_id))
self._next[agent.unique_id] = when
heappush(self._queue, (when, agent))
super().add(agent)
|
||||
def step(self) -> None:
|
||||
@@ -62,38 +139,77 @@ class TimedActivation(BaseScheduler):
|
||||
an agent will signal when it wants to be scheduled next.
|
||||
"""
|
||||
|
||||
self.logger.debug(f'Simulation step {self.next_time}')
|
||||
self.logger.debug(f"Simulation step {self.time}")
|
||||
if not self.model.running:
|
||||
return
|
||||
|
||||
self.time = self.next_time
|
||||
when = self.time
|
||||
when = NEVER
|
||||
|
||||
while self._queue and self._queue[0][0] == self.time:
|
||||
(when, agent_id) = heappop(self._queue)
|
||||
self.logger.debug(f'Stepping agent {agent_id}')
|
||||
to_process = []
|
||||
skipped = []
|
||||
next_time = INFINITY
|
||||
|
||||
agent = self._agents[agent_id]
|
||||
returned = agent.step()
|
||||
ix = 0
|
||||
|
||||
if not agent.alive:
|
||||
self.remove(agent)
|
||||
self.logger.debug(f"Queue length: {len(self._queue)}")
|
||||
|
||||
while self._queue:
|
||||
(when, agent) = self._queue[0]
|
||||
if when > self.time:
|
||||
break
|
||||
heappop(self._queue)
|
||||
if when.ready(agent):
|
||||
try:
|
||||
agent._last_return = when.return_value(agent)
|
||||
except Exception as ex:
|
||||
agent._last_except = ex
|
||||
|
||||
self._next.pop(agent.unique_id, None)
|
||||
to_process.append(agent)
|
||||
continue
|
||||
|
||||
when = (returned or Delta(1)).abs(self.time)
|
||||
if when < self.time:
|
||||
raise Exception("Cannot schedule an agent for a time in the past ({} < {})".format(when, self.time))
|
||||
next_time = min(next_time, when.next(self.time))
|
||||
self._next[agent.unique_id] = next_time
|
||||
skipped.append((when, agent))
|
||||
|
||||
self._next[agent_id] = when
|
||||
heappush(self._queue, (when, agent_id))
|
||||
if self._queue:
|
||||
next_time = min(next_time, self._queue[0][0].next(self.time))
|
||||
|
||||
self._queue = [*skipped, *self._queue]
|
||||
|
||||
for agent in to_process:
|
||||
self.logger.debug(f"Stepping agent {agent}")
|
||||
|
||||
try:
|
||||
returned = ((agent.step() or Delta(1))).abs(self.time)
|
||||
except DeadAgent:
|
||||
if agent.unique_id in self._next:
|
||||
del self._next[agent.unique_id]
|
||||
agent.alive = False
|
||||
continue
|
||||
|
||||
if not getattr(agent, "alive", True):
|
||||
continue
|
||||
|
||||
value = returned.next(self.time)
|
||||
agent._last_return = value
|
||||
|
||||
if value < self.time:
|
||||
raise Exception(
|
||||
f"Cannot schedule an agent for a time in the past ({when} < {self.time})"
|
||||
)
|
||||
if value < INFINITY:
|
||||
next_time = min(value, next_time)
|
||||
|
||||
self._next[agent.unique_id] = returned
|
||||
heappush(self._queue, (returned, agent))
|
||||
else:
|
||||
assert not self._next[agent.unique_id]
|
||||
|
||||
self.steps += 1
|
||||
self.logger.debug(f"Updating time step: {self.time} -> {next_time}")
|
||||
self.time = next_time
|
||||
|
||||
if not self._queue:
|
||||
self.time = INFINITY
|
||||
self.next_time = INFINITY
|
||||
if not self._queue or next_time == INFINITY:
|
||||
self.model.running = False
|
||||
return self.time
|
||||
|
||||
self.next_time = self._queue[0][0]
|
||||
self.logger.debug(f'Next step: {self.next_time}')
|
||||
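For context, a hedged sketch of how a scheduler like this could be wired into a bare mesa model (illustration only; it assumes the usual BaseScheduler constructor taking the model, and the model class name is made up):

# Hedged example: TimedActivation as the scheduler of a plain mesa Model.
from mesa import Model
from soil.time import TimedActivation

class MinimalModel(Model):
    def __init__(self):
        super().__init__()
        self.schedule = TimedActivation(self)

    def step(self):
        # Each call advances to the earliest time at which an agent asked to run.
        self.schedule.step()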
|
||||
@@ -4,57 +4,75 @@ import os
|
||||
import traceback
|
||||
|
||||
from functools import partial
|
||||
from shutil import copyfile
|
||||
from shutil import copyfile, move
|
||||
from multiprocessing import Pool
|
||||
|
||||
from contextlib import contextmanager
|
||||
|
||||
logger = logging.getLogger('soil')
|
||||
logger = logging.getLogger("soil")
|
||||
logger.setLevel(logging.INFO)
|
||||
|
||||
timeformat = "%H:%M:%S"
|
||||
|
||||
if os.environ.get('SOIL_VERBOSE', ''):
|
||||
if os.environ.get("SOIL_VERBOSE", ""):
|
||||
logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
|
||||
else:
|
||||
logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"
|
||||
|
||||
logFormatter = logging.Formatter(logformat, timeformat)
|
||||
|
||||
consoleHandler = logging.StreamHandler()
|
||||
consoleHandler.setFormatter(logFormatter)
|
||||
logger.addHandler(consoleHandler)
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
handlers=[
|
||||
consoleHandler,
|
||||
],
|
||||
)
|
||||
|
||||
|
||||
@contextmanager
|
||||
def timer(name='task', pre="", function=logger.info, to_object=None):
|
||||
def timer(name="task", pre="", function=logger.info, to_object=None):
|
||||
start = current_time()
|
||||
function('{}Starting {} at {}.'.format(pre, name,
|
||||
strftime("%X", gmtime(start))))
|
||||
function("{}Starting {} at {}.".format(pre, name, strftime("%X", gmtime(start))))
|
||||
yield start
|
||||
end = current_time()
|
||||
function('{}Finished {} at {} in {} seconds'.format(pre, name,
|
||||
strftime("%X", gmtime(end)),
|
||||
str(end-start)))
|
||||
function(
|
||||
"{}Finished {} at {} in {} seconds".format(
|
||||
pre, name, strftime("%X", gmtime(end)), str(end - start)
|
||||
)
|
||||
)
|
||||
if to_object:
|
||||
to_object.start = start
|
||||
to_object.end = end
|
||||
|
||||
|
||||
def safe_open(path, mode='r', backup=True, **kwargs):
|
||||
def try_backup(path, remove=False):
|
||||
if not os.path.exists(path):
|
||||
return None
|
||||
outdir = os.path.dirname(path)
|
||||
if outdir and not os.path.exists(outdir):
|
||||
os.makedirs(outdir)
|
||||
if backup and 'w' in mode and os.path.exists(path):
|
||||
creation = os.path.getctime(path)
|
||||
stamp = strftime('%Y-%m-%d_%H.%M.%S', localtime(creation))
|
||||
creation = os.path.getctime(path)
|
||||
stamp = strftime("%Y-%m-%d_%H.%M.%S", localtime(creation))
|
||||
|
||||
backup_dir = os.path.join(outdir, 'backup')
|
||||
if not os.path.exists(backup_dir):
|
||||
os.makedirs(backup_dir)
|
||||
newpath = os.path.join(backup_dir, '{}@{}'.format(os.path.basename(path),
|
||||
stamp))
|
||||
backup_dir = os.path.join(outdir, "backup")
|
||||
if not os.path.exists(backup_dir):
|
||||
os.makedirs(backup_dir)
|
||||
newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
|
||||
if move:
|
||||
move(path, newpath)
|
||||
else:
|
||||
copyfile(path, newpath)
|
||||
return newpath
|
||||
|
||||
|
||||
def safe_open(path, mode="r", backup=True, **kwargs):
|
||||
outdir = os.path.dirname(path)
|
||||
if outdir and not os.path.exists(outdir):
|
||||
os.makedirs(outdir)
|
||||
if backup and "w" in mode:
|
||||
try_backup(path)
|
||||
return open(path, mode=mode, **kwargs)
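A short, hedged usage sketch of the helper above (the path is made up):

# Hedged example: safe_open creates missing parent directories and, in write
# mode, calls try_backup to move or copy any existing file into a backup/ folder.
from soil.utils import safe_open

with safe_open("output/run1/results.csv", mode="w") as f:
    f.write("agent_id,state_id\n")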
|
||||
|
||||
|
||||
@@ -63,24 +81,26 @@ def open_or_reuse(f, *args, **kwargs):
|
||||
try:
|
||||
with safe_open(f, *args, **kwargs) as f:
|
||||
yield f
|
||||
except (AttributeError, TypeError):
|
||||
except (AttributeError, TypeError) as ex:
|
||||
yield f
|
||||
|
||||
|
||||
def flatten_dict(d):
|
||||
if not isinstance(d, dict):
|
||||
return d
|
||||
return dict(_flatten_dict(d))
|
||||
|
||||
def _flatten_dict(d, prefix=''):
|
||||
|
||||
def _flatten_dict(d, prefix=""):
|
||||
if not isinstance(d, dict):
|
||||
# print('END:', prefix, d)
|
||||
yield prefix, d
|
||||
return
|
||||
if prefix:
|
||||
prefix = prefix + '.'
|
||||
prefix = prefix + "."
|
||||
for k, v in d.items():
|
||||
# print(k, v)
|
||||
res = list(_flatten_dict(v, prefix='{}{}'.format(prefix, k)))
|
||||
res = list(_flatten_dict(v, prefix="{}{}".format(prefix, k)))
|
||||
# print('RES:', res)
|
||||
yield from res
|
||||
|
||||
@@ -92,7 +112,7 @@ def unflatten_dict(d):
|
||||
if not isinstance(k, str):
|
||||
target[k] = v
|
||||
continue
|
||||
tokens = k.split('.')
|
||||
tokens = k.split(".")
|
||||
if len(tokens) < 2:
|
||||
target[k] = v
|
||||
continue
|
||||
@@ -105,27 +125,28 @@ def unflatten_dict(d):
|
||||
|
||||
|
||||
def run_and_return_exceptions(func, *args, **kwargs):
|
||||
'''
|
||||
"""
|
||||
A wrapper for run_trial that catches exceptions and returns them.
|
||||
It is meant for async simulations.
|
||||
'''
|
||||
"""
|
||||
try:
|
||||
return func(*args, **kwargs)
|
||||
except Exception as ex:
|
||||
if ex.__cause__ is not None:
|
||||
ex = ex.__cause__
|
||||
ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__)[:])
|
||||
ex.message = "".join(
|
||||
traceback.format_exception(type(ex), ex, ex.__traceback__)[:]
|
||||
)
|
||||
return ex
|
||||
|
||||
|
||||
def run_parallel(func, iterable, parallel=False, **kwargs):
|
||||
if parallel and not os.environ.get('SOIL_DEBUG', None):
|
||||
if parallel and not os.environ.get("SOIL_DEBUG", None):
|
||||
p = Pool()
|
||||
wrapped_func = partial(run_and_return_exceptions,
|
||||
func, **kwargs)
|
||||
wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
|
||||
for i in p.imap_unordered(wrapped_func, iterable):
|
||||
if isinstance(i, Exception):
|
||||
logger.error('Trial failed:\n\t%s', i.message)
|
||||
logger.error("Trial failed:\n\t%s", i.message)
|
||||
continue
|
||||
yield i
|
||||
else:
|
||||
|
||||
@@ -4,7 +4,7 @@ import logging
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
ROOT = os.path.dirname(__file__)
|
||||
DEFAULT_FILE = os.path.join(ROOT, 'VERSION')
|
||||
DEFAULT_FILE = os.path.join(ROOT, "VERSION")
|
||||
|
||||
|
||||
def read_version(versionfile=DEFAULT_FILE):
|
||||
@@ -12,9 +12,10 @@ def read_version(versionfile=DEFAULT_FILE):
|
||||
with open(versionfile) as f:
|
||||
return f.read().strip()
|
||||
except IOError: # pragma: no cover
|
||||
logger.error(('Running an unknown version of {}.'
|
||||
'Be careful!.').format(__name__))
|
||||
return '0.0'
|
||||
logger.error(
|
||||
("Running an unknown version of {}." "Be careful!.").format(__name__)
|
||||
)
|
||||
return "0.0"
|
||||
|
||||
|
||||
__version__ = read_version()
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
from mesa.visualization.UserParam import UserSettableParameter
|
||||
|
||||
|
||||
class UserSettableParameter(UserSettableParameter):
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
@@ -20,6 +20,7 @@ from tornado.concurrent import run_on_executor
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
from ..simulation import Simulation
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
logger.setLevel(logging.INFO)
|
||||
|
||||
@@ -31,140 +32,183 @@ LOGGING_INTERVAL = 0.5
|
||||
# Workaround to let Soil load the required modules
|
||||
sys.path.append(ROOT)
|
||||
|
||||
|
||||
class PageHandler(tornado.web.RequestHandler):
|
||||
""" Handler for the HTML template which holds the visualization. """
|
||||
"""Handler for the HTML template which holds the visualization."""
|
||||
|
||||
def get(self):
|
||||
self.render('index.html', port=self.application.port,
|
||||
name=self.application.name)
|
||||
self.render(
|
||||
"index.html", port=self.application.port, name=self.application.name
|
||||
)
|
||||
|
||||
|
||||
class SocketHandler(tornado.websocket.WebSocketHandler):
|
||||
""" Handler for websocket. """
|
||||
"""Handler for websocket."""
|
||||
|
||||
executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)
|
||||
|
||||
def open(self):
|
||||
if self.application.verbose:
|
||||
logger.info('Socket opened!')
|
||||
logger.info("Socket opened!")
|
||||
|
||||
def check_origin(self, origin):
|
||||
return True
|
||||
|
||||
def on_message(self, message):
|
||||
""" Receiving a message from the websocket, parse, and act accordingly. """
|
||||
"""Receiving a message from the websocket, parse, and act accordingly."""
|
||||
|
||||
msg = tornado.escape.json_decode(message)
|
||||
|
||||
if msg['type'] == 'config_file':
|
||||
if msg["type"] == "config_file":
|
||||
|
||||
if self.application.verbose:
|
||||
print(msg['data'])
|
||||
print(msg["data"])
|
||||
|
||||
self.config = list(yaml.load_all(msg['data']))
|
||||
self.config = list(yaml.load_all(msg["data"]))
|
||||
|
||||
if len(self.config) > 1:
|
||||
error = 'Please, provide only one configuration.'
|
||||
error = "Please, provide only one configuration."
|
||||
if self.application.verbose:
|
||||
logger.error(error)
|
||||
self.write_message({'type': 'error',
|
||||
'error': error})
|
||||
self.write_message({"type": "error", "error": error})
|
||||
return
|
||||
|
||||
self.config = self.config[0]
|
||||
self.send_log('INFO.' + self.simulation_name,
|
||||
'Using config: {name}'.format(name=self.config['name']))
|
||||
self.send_log(
|
||||
"INFO." + self.simulation_name,
|
||||
"Using config: {name}".format(name=self.config["name"]),
|
||||
)
|
||||
|
||||
if 'visualization_params' in self.config:
|
||||
self.write_message({'type': 'visualization_params',
|
||||
'data': self.config['visualization_params']})
|
||||
self.name = self.config['name']
|
||||
if "visualization_params" in self.config:
|
||||
self.write_message(
|
||||
{
|
||||
"type": "visualization_params",
|
||||
"data": self.config["visualization_params"],
|
||||
}
|
||||
)
|
||||
self.name = self.config["name"]
|
||||
self.run_simulation()
|
||||
|
||||
settings = []
|
||||
for key in self.config['environment_params']:
|
||||
if type(self.config['environment_params'][key]) == float or type(self.config['environment_params'][key]) == int:
|
||||
if self.config['environment_params'][key] <= 1:
|
||||
setting_type = 'number'
|
||||
for key in self.config["environment_params"]:
|
||||
if (
|
||||
type(self.config["environment_params"][key]) == float
|
||||
or type(self.config["environment_params"][key]) == int
|
||||
):
|
||||
if self.config["environment_params"][key] <= 1:
|
||||
setting_type = "number"
|
||||
else:
|
||||
setting_type = 'great_number'
|
||||
elif type(self.config['environment_params'][key]) == bool:
|
||||
setting_type = 'boolean'
|
||||
setting_type = "great_number"
|
||||
elif type(self.config["environment_params"][key]) == bool:
|
||||
setting_type = "boolean"
|
||||
else:
|
||||
setting_type = 'undefined'
|
||||
setting_type = "undefined"
|
||||
|
||||
settings.append({
|
||||
'label': key,
|
||||
'type': setting_type,
|
||||
'value': self.config['environment_params'][key]
|
||||
})
|
||||
settings.append(
|
||||
{
|
||||
"label": key,
|
||||
"type": setting_type,
|
||||
"value": self.config["environment_params"][key],
|
||||
}
|
||||
)
|
||||
|
||||
self.write_message({'type': 'settings',
|
||||
'data': settings})
|
||||
self.write_message({"type": "settings", "data": settings})
|
||||
|
||||
elif msg['type'] == 'get_trial':
|
||||
elif msg["type"] == "get_trial":
|
||||
if self.application.verbose:
|
||||
logger.info('Trial {} requested!'.format(msg['data']))
|
||||
self.send_log('INFO.' + __name__, 'Trial {} requested!'.format(msg['data']))
|
||||
self.write_message({'type': 'get_trial',
|
||||
'data': self.get_trial(int(msg['data']))})
|
||||
logger.info("Trial {} requested!".format(msg["data"]))
|
||||
self.send_log("INFO." + __name__, "Trial {} requested!".format(msg["data"]))
|
||||
self.write_message(
|
||||
{"type": "get_trial", "data": self.get_trial(int(msg["data"]))}
|
||||
)
|
||||
|
||||
elif msg['type'] == 'run_simulation':
|
||||
elif msg["type"] == "run_simulation":
|
||||
if self.application.verbose:
|
||||
logger.info('Running new simulation for {name}'.format(name=self.config['name']))
|
||||
self.send_log('INFO.' + self.simulation_name, 'Running new simulation for {name}'.format(name=self.config['name']))
|
||||
self.config['environment_params'] = msg['data']
|
||||
logger.info(
|
||||
"Running new simulation for {name}".format(name=self.config["name"])
|
||||
)
|
||||
self.send_log(
|
||||
"INFO." + self.simulation_name,
|
||||
"Running new simulation for {name}".format(name=self.config["name"]),
|
||||
)
|
||||
self.config["environment_params"] = msg["data"]
|
||||
self.run_simulation()
|
||||
|
||||
elif msg['type'] == 'download_gexf':
|
||||
G = self.trials[ int(msg['data']) ].history_to_graph()
|
||||
elif msg["type"] == "download_gexf":
|
||||
G = self.trials[int(msg["data"])].history_to_graph()
|
||||
for node in G.nodes():
|
||||
if 'pos' in G.nodes[node]:
|
||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
||||
del (G.nodes[node]['pos'])
|
||||
writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft')
|
||||
if "pos" in G.nodes[node]:
|
||||
G.nodes[node]["viz"] = {
|
||||
"position": {
|
||||
"x": G.nodes[node]["pos"][0],
|
||||
"y": G.nodes[node]["pos"][1],
|
||||
"z": 0.0,
|
||||
}
|
||||
}
|
||||
del G.nodes[node]["pos"]
|
||||
writer = nx.readwrite.gexf.GEXFWriter(version="1.2draft")
|
||||
writer.add_graph(G)
|
||||
self.write_message({'type': 'download_gexf',
|
||||
'filename': self.config['name'] + '_trial_' + str(msg['data']),
|
||||
'data': tostring(writer.xml).decode(writer.encoding) })
|
||||
self.write_message(
|
||||
{
|
||||
"type": "download_gexf",
|
||||
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
|
||||
"data": tostring(writer.xml).decode(writer.encoding),
|
||||
}
|
||||
)
|
||||
|
||||
elif msg['type'] == 'download_json':
|
||||
G = self.trials[ int(msg['data']) ].history_to_graph()
|
||||
elif msg["type"] == "download_json":
|
||||
G = self.trials[int(msg["data"])].history_to_graph()
|
||||
for node in G.nodes():
|
||||
if 'pos' in G.nodes[node]:
|
||||
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
|
||||
del (G.nodes[node]['pos'])
|
||||
self.write_message({'type': 'download_json',
|
||||
'filename': self.config['name'] + '_trial_' + str(msg['data']),
|
||||
'data': nx.node_link_data(G) })
|
||||
if "pos" in G.nodes[node]:
|
||||
G.nodes[node]["viz"] = {
|
||||
"position": {
|
||||
"x": G.nodes[node]["pos"][0],
|
||||
"y": G.nodes[node]["pos"][1],
|
||||
"z": 0.0,
|
||||
}
|
||||
}
|
||||
del G.nodes[node]["pos"]
|
||||
self.write_message(
|
||||
{
|
||||
"type": "download_json",
|
||||
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
|
||||
"data": nx.node_link_data(G),
|
||||
}
|
||||
)
|
||||
|
||||
else:
|
||||
if self.application.verbose:
|
||||
logger.info('Unexpected message!')
|
||||
logger.info("Unexpected message!")
|
||||
|
||||
def update_logging(self):
|
||||
try:
|
||||
if (not self.log_capture_string.closed and self.log_capture_string.getvalue()):
|
||||
for i in range(len(self.log_capture_string.getvalue().split('\n')) - 1):
|
||||
self.send_log('INFO.' + self.simulation_name, self.log_capture_string.getvalue().split('\n')[i])
|
||||
if (
|
||||
not self.log_capture_string.closed
|
||||
and self.log_capture_string.getvalue()
|
||||
):
|
||||
for i in range(len(self.log_capture_string.getvalue().split("\n")) - 1):
|
||||
self.send_log(
|
||||
"INFO." + self.simulation_name,
|
||||
self.log_capture_string.getvalue().split("\n")[i],
|
||||
)
|
||||
self.log_capture_string.truncate(0)
|
||||
self.log_capture_string.seek(0)
|
||||
finally:
|
||||
if self.capture_logging:
|
||||
tornado.ioloop.IOLoop.current().call_later(LOGGING_INTERVAL, self.update_logging)
|
||||
|
||||
tornado.ioloop.IOLoop.current().call_later(
|
||||
LOGGING_INTERVAL, self.update_logging
|
||||
)
|
||||
|
||||
def on_close(self):
|
||||
if self.application.verbose:
|
||||
logger.info('Socket closed!')
|
||||
logger.info("Socket closed!")
|
||||
|
||||
def send_log(self, logger, logging):
|
||||
self.write_message({'type': 'log',
|
||||
'logger': logger,
|
||||
'logging': logging})
|
||||
self.write_message({"type": "log", "logger": logger, "logging": logging})
|
||||
|
||||
@property
|
||||
def simulation_name(self):
|
||||
return self.config.get('name', 'NoSimulationRunning')
|
||||
return self.config.get("name", "NoSimulationRunning")
|
||||
|
||||
@run_on_executor
|
||||
def nonblocking(self, config):
|
||||
@@ -174,28 +218,31 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
|
||||
@tornado.gen.coroutine
|
||||
def run_simulation(self):
|
||||
# Run simulation and capture logs
|
||||
logger.info('Running simulation!')
|
||||
if 'visualization_params' in self.config:
|
||||
del self.config['visualization_params']
|
||||
logger.info("Running simulation!")
|
||||
if "visualization_params" in self.config:
|
||||
del self.config["visualization_params"]
|
||||
with self.logging(self.simulation_name):
|
||||
try:
|
||||
config = dict(**self.config)
|
||||
config['outdir'] = os.path.join(self.application.outdir, config['name'])
|
||||
config['dump'] = self.application.dump
|
||||
config["outdir"] = os.path.join(self.application.outdir, config["name"])
|
||||
config["dump"] = self.application.dump
|
||||
self.trials = yield self.nonblocking(config)
|
||||
|
||||
self.write_message({'type': 'trials',
|
||||
'data': list(trial.name for trial in self.trials) })
|
||||
self.write_message(
|
||||
{
|
||||
"type": "trials",
|
||||
"data": list(trial.name for trial in self.trials),
|
||||
}
|
||||
)
|
||||
except Exception as ex:
|
||||
error = 'Something went wrong:\n\t{}'.format(ex)
|
||||
error = "Something went wrong:\n\t{}".format(ex)
|
||||
logging.info(error)
|
||||
self.write_message({'type': 'error',
|
||||
'error': error})
|
||||
self.send_log('ERROR.' + self.simulation_name, error)
|
||||
self.write_message({"type": "error", "error": error})
|
||||
self.send_log("ERROR." + self.simulation_name, error)
|
||||
|
||||
def get_trial(self, trial):
|
||||
logger.info('Available trials: %s ' % len(self.trials))
|
||||
logger.info('Ask for : %s' % trial)
|
||||
logger.info("Available trials: %s " % len(self.trials))
|
||||
logger.info("Ask for : %s" % trial)
|
||||
trial = self.trials[trial]
|
||||
G = trial.history_to_graph()
|
||||
return nx.node_link_data(G)
|
||||
@@ -218,21 +265,24 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
|
||||
|
||||
|
||||
class ModularServer(tornado.web.Application):
|
||||
""" Main visualization application. """
|
||||
"""Main visualization application."""
|
||||
|
||||
port = 8001
|
||||
page_handler = (r'/', PageHandler)
|
||||
socket_handler = (r'/ws', SocketHandler)
|
||||
static_handler = (r'/(.*)', tornado.web.StaticFileHandler,
|
||||
{'path': os.path.join(ROOT, 'static')})
|
||||
local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,
|
||||
{'path': ''})
|
||||
page_handler = (r"/", PageHandler)
|
||||
socket_handler = (r"/ws", SocketHandler)
|
||||
static_handler = (
|
||||
r"/(.*)",
|
||||
tornado.web.StaticFileHandler,
|
||||
{"path": os.path.join(ROOT, "static")},
|
||||
)
|
||||
local_handler = (r"/local/(.*)", tornado.web.StaticFileHandler, {"path": ""})
|
||||
|
||||
handlers = [page_handler, socket_handler, static_handler, local_handler]
|
||||
settings = {'debug': True,
|
||||
'template_path': ROOT + '/templates'}
|
||||
settings = {"debug": True, "template_path": ROOT + "/templates"}
|
||||
|
||||
def __init__(self, dump=False, outdir='output', name='SOIL', verbose=True, *args, **kwargs):
|
||||
def __init__(
|
||||
self, dump=False, outdir="output", name="SOIL", verbose=True, *args, **kwargs
|
||||
):
|
||||
|
||||
self.verbose = verbose
|
||||
self.name = name
|
||||
@@ -243,12 +293,12 @@ class ModularServer(tornado.web.Application):
|
||||
super().__init__(self.handlers, **self.settings)
|
||||
|
||||
def launch(self, port=None):
|
||||
""" Run the app. """
|
||||
"""Run the app."""
|
||||
|
||||
if port is not None:
|
||||
self.port = port
|
||||
url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
|
||||
print('Interface starting at {url}'.format(url=url))
|
||||
url = "http://127.0.0.1:{PORT}".format(PORT=self.port)
|
||||
print("Interface starting at {url}".format(url=url))
|
||||
self.listen(self.port)
|
||||
# webbrowser.open(url)
|
||||
tornado.ioloop.IOLoop.instance().start()
|
||||
@@ -263,12 +313,22 @@ def run(*args, **kwargs):
|
||||
def main():
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
|
||||
parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
|
||||
|
||||
parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
|
||||
parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
|
||||
parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
|
||||
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
|
||||
parser.add_argument(
|
||||
"--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dump", "-d", help="dumping results in folder output", action="store_true"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
|
||||
)
|
||||
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
|
||||
args = parser.parse_args()
|
||||
|
||||
run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
|
||||
run(
|
||||
name=args.name,
|
||||
port=(args.port[0] if isinstance(args.port, list) else args.port),
|
||||
verbose=args.verbose,
|
||||
)
|
||||
|
||||
@@ -4,20 +4,33 @@ from simulator import Simulator
|
||||
|
||||
|
||||
def run(simulator, name="SOIL", port=8001, verbose=False):
|
||||
server = ModularServer(simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose)
|
||||
server = ModularServer(
|
||||
simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose
|
||||
)
|
||||
server.port = port
|
||||
server.launch()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
|
||||
parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
|
||||
|
||||
parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
|
||||
parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
|
||||
parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
|
||||
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
|
||||
parser.add_argument(
|
||||
"--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dump", "-d", help="dumping results in folder output", action="store_true"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
|
||||
)
|
||||
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
|
||||
args = parser.parse_args()
|
||||
|
||||
soil = Simulator(dump=args.dump)
|
||||
run(soil, name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
|
||||
run(
|
||||
soil,
|
||||
name=args.name,
|
||||
port=(args.port[0] if isinstance(args.port, list) else args.port),
|
||||
verbose=args.verbose,
|
||||
)
|
||||
|
||||
@@ -9,17 +9,16 @@ interval: 1
|
||||
seed: "CompleteSeed!"
|
||||
model_class: Environment
|
||||
model_params:
|
||||
topologies:
|
||||
default:
|
||||
params:
|
||||
generator: complete_graph
|
||||
n: 4
|
||||
topology:
|
||||
params:
|
||||
generator: complete_graph
|
||||
n: 4
|
||||
agents:
|
||||
agent_class: CounterModel
|
||||
state:
|
||||
group: network
|
||||
times: 1
|
||||
topology: 'default'
|
||||
topology: true
|
||||
distribution:
|
||||
- agent_class: CounterModel
|
||||
weight: 0.25
|
||||
@@ -42,7 +41,7 @@ model_params:
|
||||
fixed:
|
||||
- agent_class: BaseAgent
|
||||
hidden: true
|
||||
topology: null
|
||||
topology: false
|
||||
state:
|
||||
name: 'Environment Agent 1'
|
||||
times: 10
|
||||
|
||||
@@ -4,21 +4,66 @@ import pytest
|
||||
from soil import agents, environment
|
||||
from soil import time as stime

class Dead(agents.FSM):
@agents.default_state
@agents.state
def only(self):
return self.die()

class TestMain(TestCase):
def test_die_raises_exception(self):
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(agents.DeadAgent):
d.step()

class TestMain(TestCase):
def test_die_returns_infinity(self):
'''The last step of a dead agent should return time.INFINITY'''
d = Dead(unique_id=0, model=environment.Environment())
ret = d.step().abs(0)
print(ret, 'next')
assert ret == stime.INFINITY
print(ret, "next")
assert ret == stime.NEVER

def test_die_raises_exception(self):
'''A dead agent should raise an exception if it is stepped after death'''
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(stime.DeadAgent):
d.step()
|
||||
|
||||
def test_agent_generator(self):
|
||||
'''
|
||||
The step function of an agent could be a generator. In that case, the state of the
|
||||
agent will be resumed after every call to step.
|
||||
'''
|
||||
a = 0
|
||||
class Gen(agents.BaseAgent):
|
||||
def step(self):
|
||||
nonlocal a
|
||||
for i in range(5):
|
||||
yield
|
||||
a += 1
|
||||
e = environment.Environment()
|
||||
g = Gen(model=e, unique_id=e.next_id())
|
||||
e.schedule.add(g)
|
||||
|
||||
for i in range(5):
|
||||
e.step()
|
||||
assert a == i
|
||||
|
||||
def test_state_decorator(self):
|
||||
class MyAgent(agents.FSM):
|
||||
run = 0
|
||||
@agents.default_state
|
||||
@agents.state('original')
|
||||
def root(self):
|
||||
self.run += 1
|
||||
return self.other
|
||||
|
||||
@agents.state
|
||||
def other(self):
|
||||
self.run += 1
|
||||
|
||||
e = environment.Environment()
|
||||
a = MyAgent(model=e, unique_id=e.next_id())
|
||||
a.step()
|
||||
assert a.run == 1
|
||||
a.step()
|
||||
assert a.run == 2
|
||||
|
||||
@@ -7,9 +7,9 @@ from os.path import join
|
||||
from soil import simulation, serialization, config, network, agents, utils
|
||||
|
||||
ROOT = os.path.abspath(os.path.dirname(__file__))
|
||||
EXAMPLES = join(ROOT, '..', 'examples')
|
||||
EXAMPLES = join(ROOT, "..", "examples")
|
||||
|
||||
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
|
||||
FORCE_TESTS = os.environ.get("FORCE_TESTS", "")
|
||||
|
||||
|
||||
def isequal(a, b):
|
||||
@@ -24,7 +24,6 @@ def isequal(a, b):
|
||||
|
||||
|
||||
class TestConfig(TestCase):
|
||||
|
||||
def test_conversion(self):
|
||||
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
||||
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
|
||||
@@ -38,7 +37,7 @@ class TestConfig(TestCase):
|
||||
The configuration should not change after running
|
||||
the simulation.
|
||||
"""
|
||||
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
|
||||
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
|
||||
s = simulation.from_config(config)
|
||||
init_config = copy.copy(s.to_dict())
|
||||
|
||||
@@ -47,11 +46,8 @@ class TestConfig(TestCase):
|
||||
# del nconfig['to
|
||||
isequal(init_config, nconfig)
|
||||
|
||||
|
||||
def test_topology_config(self):
|
||||
netconfig = config.NetConfig(**{
|
||||
'path': join(ROOT, 'test.gexf')
|
||||
})
|
||||
netconfig = config.NetConfig(**{"path": join(ROOT, "test.gexf")})
|
||||
net = network.from_config(netconfig, dir_path=ROOT)
|
||||
assert len(net.nodes) == 2
|
||||
assert len(net.edges) == 1
|
||||
@@ -62,36 +58,33 @@ class TestConfig(TestCase):
|
||||
network agents are initialized properly.
|
||||
"""
|
||||
cfg = {
|
||||
'name': 'CounterAgent',
|
||||
'network_params': {
|
||||
'path': join(ROOT, 'test.gexf')
|
||||
},
|
||||
'agent_class': 'CounterModel',
|
||||
"name": "CounterAgent",
|
||||
"network_params": {"path": join(ROOT, "test.gexf")},
|
||||
"agent_class": "CounterModel",
|
||||
# 'states': [{'times': 10}, {'times': 20}],
|
||||
'max_time': 2,
|
||||
'dry_run': True,
|
||||
'num_trials': 1,
|
||||
'environment_params': {
|
||||
}
|
||||
"max_time": 2,
|
||||
"dry_run": True,
|
||||
"num_trials": 1,
|
||||
"environment_params": {},
|
||||
}
|
||||
conf = config.convert_old(cfg)
|
||||
s = simulation.from_config(conf)
|
||||
|
||||
env = s.get_env()
|
||||
assert len(env.topologies['default'].nodes) == 2
|
||||
assert len(env.topologies['default'].edges) == 1
|
||||
assert len(env.G.nodes) == 2
|
||||
assert len(env.G.edges) == 1
|
||||
assert len(env.agents) == 2
|
||||
assert env.agents[0].G == env.topologies['default']
|
||||
assert env.agents[0].G == env.G
|
||||
|
||||
def test_agents_from_config(self):
|
||||
'''We test that the known complete configuration produces
|
||||
the right agents in the right groups'''
|
||||
"""We test that the known complete configuration produces
|
||||
the right agents in the right groups"""
|
||||
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
||||
s = simulation.from_config(cfg)
|
||||
env = s.get_env()
|
||||
assert len(env.topologies['default'].nodes) == 4
|
||||
assert len(env.agents(group='network')) == 4
|
||||
assert len(env.agents(group='environment')) == 1
|
||||
assert len(env.G.nodes) == 4
|
||||
assert len(env.agents(group="network")) == 4
|
||||
assert len(env.agents(group="environment")) == 1
|
||||
|
||||
def test_yaml(self):
|
||||
"""
|
||||
@@ -100,16 +93,17 @@ class TestConfig(TestCase):
|
||||
Values not present in the original config file should have reasonable
|
||||
defaults.
|
||||
"""
|
||||
with utils.timer('loading'):
|
||||
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
|
||||
with utils.timer("loading"):
|
||||
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
|
||||
s = simulation.from_config(config)
|
||||
with utils.timer('serializing'):
|
||||
with utils.timer("serializing"):
|
||||
serial = s.to_yaml()
|
||||
with utils.timer('recovering'):
|
||||
with utils.timer("recovering"):
|
||||
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
|
||||
for (k, v) in config.items():
|
||||
assert recovered[k] == v
|
||||
|
||||
|
||||
def make_example_test(path, cfg):
|
||||
def wrapped(self):
|
||||
root = os.getcwd()
|
||||
@@ -133,18 +127,19 @@ def make_example_test(path, cfg):
|
||||
# assert env.now <= config['max_time'] # But not further than allowed
|
||||
# except KeyError:
|
||||
# pass
|
||||
|
||||
return wrapped
|
||||
|
||||
|
||||
def add_example_tests():
|
||||
for config, path in serialization.load_files(
|
||||
join(EXAMPLES, '*', '*.yml'),
|
||||
join(EXAMPLES, '*.yml'),
|
||||
join(EXAMPLES, "*", "*.yml"),
|
||||
join(EXAMPLES, "*.yml"),
|
||||
):
|
||||
p = make_example_test(path=path, cfg=config)
|
||||
fname = os.path.basename(path)
|
||||
p.__name__ = 'test_example_file_%s' % fname
|
||||
p.__doc__ = '%s should be a valid configuration' % fname
|
||||
p.__name__ = "test_example_file_%s" % fname
|
||||
p.__doc__ = "%s should be a valid configuration" % fname
|
||||
setattr(TestConfig, p.__name__, p)
|
||||
del p
|
||||
|
||||
|
||||
@@ -5,9 +5,9 @@ from os.path import join
|
||||
from soil import serialization, simulation, config
|
||||
|
||||
ROOT = os.path.abspath(os.path.dirname(__file__))
|
||||
EXAMPLES = join(ROOT, '..', 'examples')
|
||||
EXAMPLES = join(ROOT, "..", "examples")
|
||||
|
||||
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
|
||||
FORCE_TESTS = os.environ.get("FORCE_TESTS", "")
|
||||
|
||||
|
||||
class TestExamples(TestCase):
|
||||
@@ -23,31 +23,31 @@ def make_example_test(path, cfg):
|
||||
s.max_steps = 100
|
||||
s.num_trials = 1
|
||||
assert isinstance(cfg, config.Config)
|
||||
if getattr(cfg, 'skip_test', False) and not FORCE_TESTS:
|
||||
self.skipTest('Example ignored.')
|
||||
if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
|
||||
self.skipTest("Example ignored.")
|
||||
envs = s.run_simulation(dry_run=True)
|
||||
assert envs
|
||||
for env in envs:
|
||||
assert env
|
||||
try:
|
||||
n = cfg.model_params['network_params']['n']
|
||||
n = cfg.model_params["network_params"]["n"]
|
||||
assert len(list(env.network_agents)) == n
|
||||
except KeyError:
|
||||
pass
|
||||
assert env.schedule.steps > 0 # It has run
|
||||
assert env.schedule.steps <= s.max_steps # But not further than allowed
|
||||
|
||||
return wrapped
|
||||
|
||||
|
||||
def add_example_tests():
|
||||
for cfg, path in serialization.load_files(
|
||||
join(EXAMPLES, '*', '*.yml'),
|
||||
join(EXAMPLES, '*.yml'),
|
||||
join(EXAMPLES, "**", "*.yml"),
|
||||
):
|
||||
p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
|
||||
fname = os.path.basename(path)
|
||||
p.__name__ = 'test_example_file_%s' % fname
|
||||
p.__doc__ = '%s should be a valid configuration' % fname
|
||||
p.__name__ = "test_example_file_%s" % fname
|
||||
p.__doc__ = "%s should be a valid configuration" % fname
|
||||
setattr(TestExamples, p.__name__, p)
|
||||
del p
|
||||
|
||||
|
||||
@@ -2,6 +2,7 @@ import os
|
||||
import io
|
||||
import tempfile
|
||||
import shutil
|
||||
import sqlite3
|
||||
|
||||
from unittest import TestCase
|
||||
from soil import exporters
|
||||
@@ -40,20 +41,15 @@ class Exporters(TestCase):
|
||||
num_trials = 5
|
||||
max_time = 2
|
||||
config = {
|
||||
'name': 'exporter_sim',
|
||||
'model_params': {
|
||||
'agents': [{
|
||||
'agent_class': agents.BaseAgent
|
||||
}]
|
||||
},
|
||||
'max_time': max_time,
|
||||
'num_trials': num_trials,
|
||||
"name": "exporter_sim",
|
||||
"model_params": {"agents": [{"agent_class": agents.BaseAgent}]},
|
||||
"max_time": max_time,
|
||||
"num_trials": num_trials,
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
|
||||
for env in s.run_simulation(exporters=[Dummy], dry_run=True):
|
||||
assert len(env.agents) == 1
|
||||
assert env.now == max_time
|
||||
|
||||
assert Dummy.started
|
||||
assert Dummy.ended
|
||||
@@ -64,40 +60,52 @@ class Exporters(TestCase):
|
||||
assert Dummy.total_time == max_time * num_trials
|
||||
|
||||
def test_writing(self):
|
||||
'''Try to write CSV, sqlite and YAML (without dry_run)'''
|
||||
"""Try to write CSV, sqlite and YAML (without dry_run)"""
|
||||
n_trials = 5
|
||||
config = {
|
||||
'name': 'exporter_sim',
|
||||
'network_params': {
|
||||
'generator': 'complete_graph',
|
||||
'n': 4
|
||||
},
|
||||
'agent_class': 'CounterModel',
|
||||
'max_time': 2,
|
||||
'num_trials': n_trials,
|
||||
'dry_run': False,
|
||||
'environment_params': {}
|
||||
"name": "exporter_sim",
|
||||
"network_params": {"generator": "complete_graph", "n": 4},
|
||||
"agent_class": "CounterModel",
|
||||
"max_time": 2,
|
||||
"num_trials": n_trials,
|
||||
"dry_run": False,
|
||||
"environment_params": {},
|
||||
}
|
||||
output = io.StringIO()
|
||||
s = simulation.from_config(config)
|
||||
tmpdir = tempfile.mkdtemp()
|
||||
envs = s.run_simulation(exporters=[
|
||||
exporters.default,
|
||||
exporters.csv,
|
||||
],
|
||||
dry_run=False,
|
||||
outdir=tmpdir,
|
||||
exporter_params={'copy_to': output})
|
||||
envs = s.run_simulation(
|
||||
exporters=[
|
||||
exporters.default,
|
||||
exporters.csv,
|
||||
],
|
||||
model_params={
|
||||
"agent_reporters": {"times": "times"},
|
||||
"model_reporters": {
|
||||
"constant": lambda x: 1,
|
||||
},
|
||||
},
|
||||
dry_run=False,
|
||||
outdir=tmpdir,
|
||||
exporter_params={"copy_to": output},
|
||||
)
|
||||
result = output.getvalue()
|
||||
|
||||
simdir = os.path.join(tmpdir, s.group or '', s.name)
|
||||
with open(os.path.join(simdir, '{}.dumped.yml'.format(s.name))) as f:
|
||||
simdir = os.path.join(tmpdir, s.group or "", s.name)
|
||||
with open(os.path.join(simdir, "{}.dumped.yml".format(s.name))) as f:
|
||||
result = f.read()
|
||||
assert result
|
||||
|
||||
try:
|
||||
for e in envs:
|
||||
with open(os.path.join(simdir, '{}.env.csv'.format(e.id))) as f:
|
||||
db = sqlite3.connect(os.path.join(simdir, f"{s.name}.sqlite"))
|
||||
cur = db.cursor()
|
||||
agent_entries = cur.execute("SELECT * from agents").fetchall()
|
||||
env_entries = cur.execute("SELECT * from env").fetchall()
|
||||
assert len(agent_entries) > 0
|
||||
assert len(env_entries) > 0
|
||||
|
||||
with open(os.path.join(simdir, "{}.env.csv".format(e.id))) as f:
|
||||
result = f.read()
|
||||
assert result
|
||||
finally:
|
||||
|
||||
@@ -6,60 +6,55 @@ import networkx as nx
|
||||
from functools import partial
|
||||
|
||||
from os.path import join
|
||||
from soil import (simulation, Environment, agents, network, serialization,
|
||||
utils, config)
|
||||
from soil import simulation, Environment, agents, network, serialization, utils, config
|
||||
from soil.time import Delta
|
||||
|
||||
ROOT = os.path.abspath(os.path.dirname(__file__))
|
||||
EXAMPLES = join(ROOT, '..', 'examples')
|
||||
EXAMPLES = join(ROOT, "..", "examples")
|
||||
|
||||
|
||||
class CustomAgent(agents.FSM, agents.NetworkAgent):
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
def normal(self):
|
||||
self.neighbors = self.count_agents(state_id='normal',
|
||||
limit_neighbors=True)
|
||||
self.neighbors = self.count_agents(state_id="normal", limit_neighbors=True)
|
||||
|
||||
@agents.state
|
||||
def unreachable(self):
|
||||
return
|
||||
|
||||
|
||||
class TestMain(TestCase):
|
||||
|
||||
def test_empty_simulation(self):
|
||||
"""A simulation with a base behaviour should do nothing"""
|
||||
config = {
|
||||
'model_params': {
|
||||
'network_params': {
|
||||
'path': join(ROOT, 'test.gexf')
|
||||
},
|
||||
'agent_class': 'BaseAgent',
|
||||
"model_params": {
|
||||
"network_params": {"path": join(ROOT, "test.gexf")},
|
||||
"agent_class": "BaseAgent",
|
||||
}
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
s.run_simulation(dry_run=True)
|
||||
|
||||
|
||||
def test_network_agent(self):
|
||||
"""
|
||||
The initial states should be applied to the agent and the
|
||||
agent should be able to update its state."""
|
||||
config = {
|
||||
'name': 'CounterAgent',
|
||||
'num_trials': 1,
|
||||
'max_time': 2,
|
||||
'model_params': {
|
||||
'network_params': {
|
||||
'generator': nx.complete_graph,
|
||||
'n': 2,
|
||||
"name": "CounterAgent",
|
||||
"num_trials": 1,
|
||||
"max_time": 2,
|
||||
"model_params": {
|
||||
"network_params": {
|
||||
"generator": nx.complete_graph,
|
||||
"n": 2,
|
||||
},
|
||||
'agent_class': 'CounterModel',
|
||||
'states': {
|
||||
0: {'times': 10},
|
||||
1: {'times': 20},
|
||||
"agent_class": "CounterModel",
|
||||
"states": {
|
||||
0: {"times": 10},
|
||||
1: {"times": 20},
|
||||
},
|
||||
}
|
||||
},
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
|
||||
@@ -68,48 +63,41 @@ class TestMain(TestCase):
|
||||
The initial states should be applied to the agent and the
|
||||
agent should be able to update its state."""
|
||||
config = {
|
||||
'version': '2',
|
||||
'name': 'CounterAgent',
|
||||
'dry_run': True,
|
||||
'num_trials': 1,
|
||||
'max_time': 2,
|
||||
'model_params': {
|
||||
'topologies': {
|
||||
'default': {
|
||||
'path': join(ROOT, 'test.gexf')
|
||||
}
|
||||
"version": "2",
|
||||
"name": "CounterAgent",
|
||||
"dry_run": True,
|
||||
"num_trials": 1,
|
||||
"max_time": 2,
|
||||
"model_params": {
|
||||
"topology": {"path": join(ROOT, "test.gexf")},
|
||||
"agents": {
|
||||
"agent_class": "CounterModel",
|
||||
"topology": True,
|
||||
"fixed": [{"state": {"times": 10}}, {"state": {"times": 20}}],
|
||||
},
|
||||
'agents': {
|
||||
'agent_class': 'CounterModel',
|
||||
'topology': 'default',
|
||||
'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
|
||||
}
|
||||
}
|
||||
},
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
env = s.get_env()
|
||||
assert isinstance(env.agents[0], agents.CounterModel)
|
||||
assert env.agents[0].G == env.topologies['default']
|
||||
assert env.agents[0]['times'] == 10
|
||||
assert env.agents[0]['times'] == 10
|
||||
assert env.agents[0].G == env.G
|
||||
assert env.agents[0]["times"] == 10
|
||||
assert env.agents[0]["times"] == 10
|
||||
env.step()
|
||||
assert env.agents[0]['times'] == 11
|
||||
assert env.agents[1]['times'] == 21
|
||||
assert env.agents[0]["times"] == 11
|
||||
assert env.agents[1]["times"] == 21
|
||||
|
||||
def test_init_and_count_agents(self):
|
||||
"""Agents should be properly initialized and counting should filter them properly"""
|
||||
#TODO: separate this test into two or more test cases
|
||||
# TODO: separate this test into two or more test cases
|
||||
config = {
|
||||
'max_time': 10,
|
||||
'model_params': {
|
||||
'agents': [{'agent_class': CustomAgent, 'weight': 1, 'topology': 'default'},
|
||||
{'agent_class': CustomAgent, 'weight': 3, 'topology': 'default'},
|
||||
"max_time": 10,
|
||||
"model_params": {
|
||||
"agents": [
|
||||
{"agent_class": CustomAgent, "weight": 1, "topology": True},
|
||||
{"agent_class": CustomAgent, "weight": 3, "topology": True},
|
||||
],
|
||||
'topologies': {
|
||||
'default': {
|
||||
'path': join(ROOT, 'test.gexf')
|
||||
}
|
||||
},
|
||||
"topology": {"path": join(ROOT, "test.gexf")},
|
||||
},
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
@@ -120,40 +108,45 @@ class TestMain(TestCase):
|
||||
assert env.count_agents(weight=3) == 1
|
||||
assert env.count_agents(agent_class=CustomAgent) == 2
|
||||
|
||||
|
||||
def test_torvalds_example(self):
|
||||
"""A complete example from a documentation should work."""
|
||||
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
|
||||
config['model_params']['network_params']['path'] = join(EXAMPLES,
|
||||
config['model_params']['network_params']['path'])
|
||||
config = serialization.load_file(join(EXAMPLES, "torvalds.yml"))[0]
|
||||
config["model_params"]["network_params"]["path"] = join(
|
||||
EXAMPLES, config["model_params"]["network_params"]["path"]
|
||||
)
|
||||
s = simulation.from_config(config)
|
||||
env = s.run_simulation(dry_run=True)[0]
|
||||
for a in env.network_agents:
|
||||
skill_level = a.state['skill_level']
|
||||
if a.id == 'Torvalds':
|
||||
assert skill_level == 'God'
|
||||
assert a.state['total'] == 3
|
||||
assert a.state['neighbors'] == 2
|
||||
elif a.id == 'balkian':
|
||||
assert skill_level == 'developer'
|
||||
assert a.state['total'] == 3
|
||||
assert a.state['neighbors'] == 1
|
||||
skill_level = a.state["skill_level"]
|
||||
if a.id == "Torvalds":
|
||||
assert skill_level == "God"
|
||||
assert a.state["total"] == 3
|
||||
assert a.state["neighbors"] == 2
|
||||
elif a.id == "balkian":
|
||||
assert skill_level == "developer"
|
||||
assert a.state["total"] == 3
|
||||
assert a.state["neighbors"] == 1
|
||||
else:
|
||||
assert skill_level == 'beginner'
|
||||
assert a.state['total'] == 3
|
||||
assert a.state['neighbors'] == 1
|
||||
assert skill_level == "beginner"
|
||||
assert a.state["total"] == 3
|
||||
assert a.state["neighbors"] == 1
|
||||
|
||||
def test_serialize_class(self):
|
||||
ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
|
||||
assert name == 'soil.agents.BaseAgent'
|
||||
assert name == "soil.agents.BaseAgent"
|
||||
assert ser == agents.BaseAgent
|
||||
|
||||
ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
|
||||
assert name == 'BaseAgent'
|
||||
ser, name = serialization.serialize(
|
||||
agents.BaseAgent,
|
||||
known_modules=[
|
||||
"soil",
|
||||
],
|
||||
)
|
||||
assert name == "BaseAgent"
|
||||
assert ser == agents.BaseAgent
|
||||
|
||||
ser, name = serialization.serialize(CustomAgent)
|
||||
assert name == 'test_main.CustomAgent'
|
||||
assert name == "test_main.CustomAgent"
|
||||
assert ser == CustomAgent
|
||||
pickle.dumps(ser)
|
||||
|
||||
@@ -166,72 +159,43 @@ class TestMain(TestCase):
|
||||
assert i == des
|
||||
|
||||
def test_serialize_agent_class(self):
|
||||
'''A class from soil.agents should be serialized without the module part'''
|
||||
ser = agents.serialize_type(CustomAgent)
|
||||
assert ser == 'test_main.CustomAgent'
|
||||
ser = agents.serialize_type(agents.BaseAgent)
|
||||
assert ser == 'BaseAgent'
|
||||
"""A class from soil.agents should be serialized without the module part"""
|
||||
ser = agents._serialize_type(CustomAgent)
|
||||
assert ser == "test_main.CustomAgent"
|
||||
ser = agents._serialize_type(agents.BaseAgent)
|
||||
assert ser == "BaseAgent"
|
||||
pickle.dumps(ser)
|
||||
|
||||
def test_deserialize_agent_distribution(self):
|
||||
agent_distro = [
|
||||
{
|
||||
'agent_class': 'CounterModel',
|
||||
'weight': 1
|
||||
},
|
||||
{
|
||||
'agent_class': 'test_main.CustomAgent',
|
||||
'weight': 2
|
||||
},
|
||||
]
|
||||
converted = agents.deserialize_definition(agent_distro)
|
||||
assert converted[0]['agent_class'] == agents.CounterModel
|
||||
assert converted[1]['agent_class'] == CustomAgent
|
||||
pickle.dumps(converted)
|
||||
|
||||
def test_serialize_agent_distribution(self):
|
||||
agent_distro = [
|
||||
{
|
||||
'agent_class': agents.CounterModel,
|
||||
'weight': 1
|
||||
},
|
||||
{
|
||||
'agent_class': CustomAgent,
|
||||
'weight': 2
|
||||
},
|
||||
]
|
||||
converted = agents.serialize_definition(agent_distro)
|
||||
assert converted[0]['agent_class'] == 'CounterModel'
|
||||
assert converted[1]['agent_class'] == 'test_main.CustomAgent'
|
||||
pickle.dumps(converted)
|
||||
|
||||
def test_templates(self):
|
||||
'''Loading a template should result in several configs'''
|
||||
configs = serialization.load_file(join(EXAMPLES, 'template.yml'))
|
||||
"""Loading a template should result in several configs"""
|
||||
configs = serialization.load_file(join(EXAMPLES, "template.yml"))
|
||||
assert len(configs) > 0
|
||||
|
||||
def test_until(self):
|
||||
config = {
|
||||
'name': 'until_sim',
|
||||
'model_params': {
|
||||
'network_params': {},
|
||||
'agents': {
|
||||
'fixed': [{
|
||||
'agent_class': agents.BaseAgent,
|
||||
}]
|
||||
"name": "until_sim",
|
||||
"model_params": {
|
||||
"network_params": {},
|
||||
"agents": {
|
||||
"fixed": [
|
||||
{
|
||||
"agent_class": agents.BaseAgent,
|
||||
}
|
||||
]
|
||||
},
|
||||
},
|
||||
'max_time': 2,
|
||||
'num_trials': 50,
|
||||
"max_time": 2,
|
||||
"num_trials": 50,
|
||||
}
|
||||
s = simulation.from_config(config)
|
||||
runs = list(s.run_simulation(dry_run=True))
|
||||
over = list(x.now for x in runs if x.now > 2)
|
||||
assert len(runs) == config['num_trials']
|
||||
assert len(runs) == config["num_trials"]
|
||||
assert len(over) == 0
|
||||
|
||||
def test_fsm(self):
|
||||
'''Basic state change'''
|
||||
"""Basic state change"""
|
||||
|
||||
class ToggleAgent(agents.FSM):
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
@@ -250,7 +214,8 @@ class TestMain(TestCase):
|
||||
assert a.state_id == a.ping.id
|
||||
|
||||
def test_fsm_when(self):
|
||||
'''Basic state change'''
|
||||
"""Basic state change"""
|
||||
|
||||
class ToggleAgent(agents.FSM):
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
|
||||
@@ -1,4 +1,4 @@
'''
"""
Mesa-SOIL integration tests

We have to test that:
@@ -8,13 +8,15 @@ We have to test that:

- Mesa visualizations work with SOIL simulations

'''
"""
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid


class MoneyAgent(Agent):
    """ An agent with fixed initial wealth."""
    """An agent with fixed initial wealth."""

    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.wealth = 1
@@ -33,15 +35,15 @@ class MoneyAgent(Agent):

    def move(self):
        possible_steps = self.model.grid.get_neighborhood(
            self.pos,
            moore=True,
            include_center=False)
            self.pos, moore=True, include_center=False
        )
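        # Pick a random cell from the Moore neighborhood (excluding the current
        # cell) and move the agent there.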
        new_position = self.random.choice(possible_steps)
        self.model.grid.move_agent(self, new_position)


class MoneyModel(Model):
    """A model with some number of agents."""

    def __init__(self, N, width, height):
        self.num_agents = N
        self.grid = MultiGrid(width, height, True)
@@ -58,7 +60,7 @@ class MoneyModel(Model):
        self.grid.place_agent(a, (x, y))

    def step(self):
        '''Advance the model by one step.'''
        """Advance the model by one step."""
        self.schedule.step()

@@ -10,7 +10,7 @@ from soil import config, network, environment, agents, simulation
from test_main import CustomAgent

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
EXAMPLES = join(ROOT, "..", "examples")


class TestNetwork(TestCase):
@@ -19,21 +19,13 @@ class TestNetwork(TestCase):
        Load a graph from file if the extension is known.
        Raise an exception otherwise.
        """
        config = {
            'network_params': {
                'path': join(ROOT, 'test.gexf')
            }
        }
        G = network.from_config(config['network_params'])
        config = {"network_params": {"path": join(ROOT, "test.gexf")}}
        G = network.from_config(config["network_params"])
        assert G
        assert len(G) == 2
        with self.assertRaises(AttributeError):
            config = {
                'network_params': {
                    'path': join(ROOT, 'unknown.extension')
                }
            }
            G = network.from_config(config['network_params'])
            config = {"network_params": {"path": join(ROOT, "unknown.extension")}}
            G = network.from_config(config["network_params"])
            print(G)

    def test_generate_barabasi(self):
@@ -41,15 +33,11 @@ class TestNetwork(TestCase):
        If no path is given, a generator and network parameters
        should be used to generate a network
        """
        cfg = {
            'params': {
                'generator': 'barabasi_albert_graph'
            }
        }
        cfg = {"params": {"generator": "barabasi_albert_graph"}}
        with self.assertRaises(Exception):
            G = network.from_config(cfg)
        cfg['params']['n'] = 100
        cfg['params']['m'] = 10
        cfg["params"]["n"] = 100
        cfg["params"]["m"] = 10
        G = network.from_config(cfg)
        assert len(G) == 100

@@ -61,68 +49,57 @@ class TestNetwork(TestCase):
        G = nx.random_geometric_graph(20, 0.1)
        env = environment.NetworkEnvironment(topology=G)
        f = io.BytesIO()
        assert env.topologies['default']
        network.dump_gexf(env.topologies['default'], f)
        assert env.G
        network.dump_gexf(env.G, f)

    def test_networkenvironment_creation(self):
        """NetworkEnvironment should accept a network config as a parameter"""
        model_params = {
            'topologies': {
                'default': {
                    'path': join(ROOT, 'test.gexf')
                }
            "topology": {"path": join(ROOT, "test.gexf")},
            "agents": {
                "topology": True,
                "distribution": [
                    {
                        "agent_class": CustomAgent,
                    }
                ],
            },
            'agents': {
                'topology': 'default',
                'distribution': [{
                    'agent_class': CustomAgent,
                }]
            }
        }
        env = environment.Environment(**model_params)
        assert env.topologies
        assert env.G
        env.step()
        assert len(env.topologies['default']) == 2
        assert len(env.G) == 2
        assert len(env.agents) == 2
        assert env.agents[1].count_agents(state_id='normal') == 2
        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
        assert env.agents[1].count_agents(state_id="normal") == 2
        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_custom_agent_neighbors(self):
        """Allow for search of neighbors with a certain state_id"""
        config = {
            'model_params': {
                'topologies': {
                    'default': {
                        'path': join(ROOT, 'test.gexf')
                    }
                },
                'agents': {
                    'topology': 'default',
                    'distribution': [
                        {
                            'weight': 1,
                            'agent_class': CustomAgent
                        }
                    ]
                }
            "model_params": {
                "topology": {"path": join(ROOT, "test.gexf")},
                "agents": {
                    "topology": True,
                    "distribution": [{"weight": 1, "agent_class": CustomAgent}],
                },
            },
            'max_time': 10,
            "max_time": 10,
        }
        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
        assert env.agents[1].count_agents(state_id='normal') == 2
        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
        assert env.agents[1].count_agents(state_id="normal") == 2
        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_subgraph(self):
        '''An agent should be able to subgraph the global topology'''
        """An agent should be able to subgraph the global topology"""
        G = nx.Graph()
        G.add_node(3)
        G.add_edge(1, 2)
        distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
        aconfig = config.AgentConfig(distribution=distro, topology='default')
        env = environment.Environment(name='Test', topologies={'default': G}, agents=aconfig)
        aconfig = config.AgentConfig(distribution=distro, topology=True)
        env = environment.Environment(name="Test", topology=G, agents=aconfig)
        lst = list(env.network_agents)

        a2 = env.find_one(node_id=2)

74 tests/test_time.py Normal file
@@ -0,0 +1,74 @@
from unittest import TestCase

from soil import time, agents, environment


class TestMain(TestCase):
    def test_cond(self):
        '''
        A condition should match a When if the condition is True
        '''

        t = time.Cond(lambda t: True)
        f = time.Cond(lambda t: False)
        for i in range(10):
            w = time.When(i)
            assert w == t
            assert w is not f

    def test_cond_delta(self):
        '''
        Comparing a Cond to a Delta should always return False
        '''

        c = time.Cond(lambda t: False)
        d = time.Delta(1)
        assert c is not d

    def test_cond_env(self):
        '''
        An agent that yields a Cond should only be awakened once the condition is met
        '''

        times_started = []
        times_awakened = []
        times = []
        done = 0

        class CondAgent(agents.BaseAgent):

            def step(self):
                nonlocal done
                times_started.append(self.now)
                while True:
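                    # Yielding a Cond suspends this generator; the scheduler
                    # re-checks the condition on every step and only resumes the
                    # agent once it holds (see the step-count assertions below).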
                    yield time.Cond(lambda agent: agent.model.schedule.time >= 10)
                    times_awakened.append(self.now)
                    if self.now >= 10:
                        break
                done += 1

        env = environment.Environment(agents=[{'agent_class': CondAgent}])

        while env.schedule.time < 11:
            env.step()
            times.append(env.now)
        assert env.schedule.time == 11
        assert times_started == [0]
        assert times_awakened == [10]
        assert done == 1
        # The first time will produce the Cond.
        # Since there are no other agents, time will not advance, but the number
        # of steps will.
        assert env.schedule.steps == 12
        assert len(times) == 12

        while env.schedule.time < 12:
            env.step()
            times.append(env.now)

        assert env.schedule.time == 12
        assert times_started == [0, 11]
        assert times_awakened == [10, 11]
        assert done == 2
        # Once more to yield the cond, another one to continue
        assert env.schedule.steps == 14
        assert len(times) == 14
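
For contrast with the Cond-based agent above, a minimal sketch of an agent that sleeps for a fixed interval instead (illustrative only: it assumes a time.Delta can be yielded from a step generator the same way a Cond is, and the class name is made up):

class SleepyAgent(agents.BaseAgent):
    def step(self):
        while True:
            # Ask the scheduler to resume this agent two time units from now.
            yield time.Delta(2)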