Mirror of https://github.com/gsi-upm/soil (synced 2025-10-27 21:58:17 +00:00)

Compare commits: cd62c23cb9...0.30.0rc2 (11 commits)
Commits in this range (SHA1):

a2fb25c160
5fcf610108
159c9a9077
3776c4e5c5
880a9f2a1c
227fdf050e
5d759d0072
77d08fc592
0efcd24d90
78833a9e08
d9947c2c52
12  CHANGELOG.md
@@ -3,16 +3,22 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## [0.3 UNRELEASED]
+## [0.30 UNRELEASED]

 ### Added

-* Simple debugging capabilities, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents)
+* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
+* Ability to run
+* Ability to
+* The `soil.exporters` module to export the results of datacollectors (model.datacollector) into files at the end of trials/simulations
+* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
+* FSM agents can now have generators as states. They work similar to normal states, with one caveat. Only `time` values can be yielded, not a state. This is because the state will not change, it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states.

 ### Changed

 * Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
 * There may be more than one topology/network in the simulation
 * Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to be assigned to.

 ### Removed

 * Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.

 ## [0.20.7]

 ### Changed

 * Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
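As an illustration of the generator-state feature described in the changelog entry above, a minimal hypothetical sketch could look like the following. It is not part of this changeset: the agent name `Waiter` is made up, and the `soil.time.Delta` import location is assumed from the older examples in this repository (the new `cars.py` example below yields `Delta(...)` from a state in the same way).

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta  # assumption: Delta is the "time value" yielded by states


class Waiter(FSM):
    """Hypothetical agent illustrating a generator used as an FSM state."""

    @default_state
    @state
    def waiting(self):
        # Only time values may be yielded; the state is resumed after each yield.
        for _ in range(3):
            yield Delta(1)  # wait one time unit, then continue inside this state
        # The return value may be a state (or a (state, time) tuple), as in normal states.
        return self.done

    @state
    def done(self):
        self.die()
```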
@@ -10,19 +10,14 @@ seed: "CompleteSeed!"
 model_class: Environment
 model_params:
 am_i_complete: true
-topologies:
-default:
+topology:
 params:
 generator: complete_graph
-n: 10
+n: 12
-another_graph:
-params:
-generator: complete_graph
-n: 2
 environment:
 agents:
 agent_class: CounterModel
-topology: default
+topology: true
 state:
 times: 1
 # In this group we are not specifying any topology
@@ -30,25 +25,23 @@ model_params:
 - name: 'Environment Agent 1'
 agent_class: BaseAgent
 group: environment
-topology: null
+topology: false
 hidden: true
 state:
 times: 10
 - agent_class: CounterModel
 id: 0
-group: other_counters
+group: fixed_counters
-topology: another_graph
 state:
 times: 1
 total: 0
 - agent_class: CounterModel
-topology: another_graph
-group: other_counters
+group: fixed_counters
 id: 1
 distribution:
 - agent_class: CounterModel
 weight: 1
-group: general_counters
+group: distro_counters
 state:
 times: 3
 - agent_class: AggregatedCounter
@@ -1,63 +0,0 @@
---
version: '2'
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: "soil.Environment"
model_params:
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
agents:
# The values here will be used as default values for any agent
agent_class: CounterModel
topology: default
state:
times: 1
# This specifies a distribution of agents, each with a `weight` or an explicit number of agents
distribution:
- agent_class: CounterModel
weight: 1
# This is inherited from the default settings
#topology: default
state:
times: 3
- agent_class: AggregatedCounter
topology: default
weight: 0.2
fixed:
- name: 'Environment Agent 1'
# All the other agents will assigned to the 'default' group
group: environment
# Do not count this agent towards total limits
hidden: true
agent_class: soil.BaseAgent
topology: null
state:
times: 10
- agent_class: CounterModel
topology: another_graph
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
id: 1
override:
# 2 agents that match this filter will be updated to match the state {times: 5}
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
@@ -2,11 +2,12 @@ from networkx import Graph
 import random
 import networkx as nx


 def mygenerator(n=5, n_edges=5):
-    '''
+    """
     Just a simple generator that creates a network with n nodes and
     n_edges edges. Edges are assigned randomly, only avoiding self loops.
-    '''
+    """
     G = nx.Graph()

     for i in range(n):
@@ -19,9 +20,3 @@ def mygenerator(n=5, n_edges=5):
         n_out = random.choice(nodes)
         G.add_edge(n_in, n_out)
     return G
@@ -2,33 +2,36 @@ from soil.agents import FSM, state, default_state


 class Fibonacci(FSM):
-    '''Agent that only executes in t_steps that are Fibonacci numbers'''
+    """Agent that only executes in t_steps that are Fibonacci numbers"""

-    defaults = {
-        'prev': 1
-    }
+    defaults = {"prev": 1}

     @default_state
     @state
     def counting(self):
-        self.log('Stopping at {}'.format(self.now))
-        prev, self['prev'] = self['prev'], max([self.now, self['prev']])
+        self.log("Stopping at {}".format(self.now))
+        prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
         return None, self.env.timeout(prev)


 class Odds(FSM):
-    '''Agent that only executes in odd t_steps'''
+    """Agent that only executes in odd t_steps"""

     @default_state
     @state
     def odds(self):
-        self.log('Stopping at {}'.format(self.now))
+        self.log("Stopping at {}".format(self.now))
         return None, self.env.timeout(1 + self.now % 2)


-if __name__ == '__main__':
-    import logging
-    logging.basicConfig(level=logging.INFO)
+if __name__ == "__main__":
     from soil import Simulation
-    s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci},
-                                   {'ids': [1], 'agent_class': Odds}],
+
+    s = Simulation(
+        network_agents=[
+            {"ids": [0], "agent_class": Fibonacci},
+            {"ids": [1], "agent_class": Odds},
+        ],
         network_params={"generator": "complete_graph", "n": 2},
         max_time=100,
     )
7  examples/events_and_messages/README.md  (new file)

@@ -0,0 +1,7 @@
This example can be run with command-line options, like this:

```bash
python cars.py --level DEBUG -e summary --csv
```

This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.
205  examples/events_and_messages/cars.py  (new file)

@@ -0,0 +1,205 @@
"""
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
from their location to their desired location.

An example scenario could play like the following:

- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
- Passenger(1) tells every driver that it wants to request a Journey.
- Each driver receives the request.
  If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
- When Passenger(1) accepts the request, two things happen:
  - Passenger(1) changes its state to `driving_home`
  - Driver(2) starts moving towards the origin of the Journey
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
- When Driver(2) reaches the destination (carrying Passenger(1) along):
  - Driver(2) starts wandering again
  - Passenger(1) dies, and is removed from the simulation
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations

from soil import *
from soil import events
from mesa.space import MultiGrid


# More complex scenarios may use more than one type of message between objects.
# A common pattern is to use `enum.Enum` to represent state changes in a request.
@dataclass
class Journey:
    """
    This represents a request for a journey. Passengers and drivers exchange this object.

    A journey may have a driver assigned or not. If the driver has not been assigned, this
    object is considered a "request for a journey".
    """

    origin: (int, int)
    destination: (int, int)
    tip: float

    passenger: Passenger
    driver: Driver = None


class City(EventedEnvironment):
    """
    An environment with a grid where drivers and passengers will be placed.

    The number of drivers and riders is configurable through its parameters:

    :param str n_cars: The total number of drivers to add
    :param str n_passengers: The number of passengers in the simulation
    :param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
        and `n_cars` params.
    :param int height: Height of the internal grid
    :param int width: Width of the internal grid
    """

    def __init__(self, *args, n_cars=1, n_passengers=10,
                 height=100, width=100, agents=None,
                 model_reporters=None,
                 **kwargs):
        self.grid = MultiGrid(width=width, height=height, torus=False)
        if agents is None:
            agents = []
            for i in range(n_cars):
                agents.append({'agent_class': Driver})
            for i in range(n_passengers):
                agents.append({'agent_class': Passenger})
        model_reporters = model_reporters or {'earnings': 'total_earnings', 'n_passengers': 'number_passengers'}
        print('REPORTERS', model_reporters)
        super().__init__(*args, agents=agents, model_reporters=model_reporters, **kwargs)
        for agent in self.agents:
            self.grid.place_agent(agent, (0, 0))
            self.grid.move_to_empty(agent)

    @property
    def total_earnings(self):
        return sum(d.earnings for d in self.agents(agent_class=Driver))

    @property
    def number_passengers(self):
        return self.count_agents(agent_class=Passenger)


class Driver(Evented, FSM):
    pos = None
    journey = None
    earnings = 0

    def on_receive(self, msg, sender):
        '''This is not a state. It will run (and block) every time check_messages is invoked'''
        if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
            msg.driver = self
            self.journey = msg

    def check_passengers(self):
        '''If there are no more passengers, stop forever'''
        c = self.count_agents(agent_class=Passenger)
        self.info(f"Passengers left {c}")
        if not c:
            self.die()

    @default_state
    @state
    def wandering(self):
        '''Move around the city until a journey is accepted'''
        target = None
        self.check_passengers()
        self.journey = None
        while self.journey is None:  # No potential journeys detected (see on_receive)
            if target is None or not self.move_towards(target):
                target = self.random.choice(self.model.grid.get_neighborhood(self.pos, moore=False))

            self.check_passengers()
            self.check_messages()  # This will call on_receive behind the scenes, and the agent's status will be updated
            yield Delta(30)  # Wait at least 30 seconds before checking again

        try:
            # Re-send the journey to the passenger, to confirm that we have been selected
            self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
        except events.TimedOut:
            # No journey has been accepted. Try again
            self.journey = None
            return

        return self.driving

    @state
    def driving(self):
        '''The journey has been accepted. Pick them up and take them to their destination'''
        while self.move_towards(self.journey.origin):
            yield
        while self.move_towards(self.journey.destination, with_passenger=True):
            yield
        self.earnings += self.journey.tip
        self.check_passengers()
        return self.wandering

    def move_towards(self, target, with_passenger=False):
        '''Move one cell at a time towards a target'''
        self.info(f"Moving { self.pos } -> { target }")
        if target[0] == self.pos[0] and target[1] == self.pos[1]:
            return False

        next_pos = [self.pos[0], self.pos[1]]
        for idx in [0, 1]:
            if self.pos[idx] < target[idx]:
                next_pos[idx] += 1
                break
            if self.pos[idx] > target[idx]:
                next_pos[idx] -= 1
                break
        self.model.grid.move_agent(self, tuple(next_pos))
        if with_passenger:
            self.journey.passenger.pos = self.pos  # This could be communicated through messages
        return True


class Passenger(Evented, FSM):
    pos = None

    def on_receive(self, msg, sender):
        '''This is not a state. It will be run synchronously every time `check_messages` is run'''
        if isinstance(msg, Journey):
            self.journey = msg
            return msg

    @default_state
    @state
    def asking(self):
        destination = (self.random.randint(0, self.model.grid.height), self.random.randint(0, self.model.grid.width))
        self.journey = None
        journey = Journey(origin=self.pos,
                          destination=destination,
                          tip=self.random.randint(10, 100),
                          passenger=self)

        timeout = 60
        expiration = self.now + timeout
        self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
        while not self.journey:
            self.info(f"Passenger at: { self.pos }. Checking for responses.")
            try:
                yield self.received(expiration=expiration)
            except events.TimedOut:
                self.info(f"Passenger at: { self.pos }. Asking for journey.")
                self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
                expiration = self.now + timeout
            self.check_messages()
        return self.driving_home

    @state
    def driving_home(self):
        while self.pos[0] != self.journey.destination[0] or self.pos[1] != self.journey.destination[1]:
            yield self.received(timeout=60)
        self.info("Got home safe!")
        self.die()


simulation = Simulation(name='RideHailing', model_class=City, model_params={'n_passengers': 2})

if __name__ == "__main__":
    with easy(simulation) as s:
        s.run()
@@ -8,17 +8,12 @@ interval: 1
 seed: '1'
 model_class: social_wealth.MoneyEnv
 model_params:
-topologies:
-default:
-params:
 generator: social_wealth.graph_generator
-n: 5
 agents:
+topology: true
 distribution:
 - agent_class: social_wealth.SocialMoneyAgent
-topology: default
 weight: 1
-mesa_agent_class: social_wealth.MoneyAgent
 N: 10
 width: 50
 height: 50
@@ -2,6 +2,7 @@ from mesa.visualization.ModularVisualization import ModularServer
 from soil.visualization import UserSettableParameter
 from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
 from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
+import networkx as nx


 class MyNetwork(NetworkModule):
@@ -13,15 +14,18 @@ def network_portrayal(env):
     # The model ensures there is 0 or 1 agent per node

     portrayal = dict()
+    wealths = {
+        node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
+    }
     portrayal["nodes"] = [
         {
-            "id": agent_id,
-            "size": env.get_agent(agent_id).wealth,
-            # "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
-            "color": "#CC0000",
-            "label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
+            "id": node_id,
+            "size": 2 * (wealth + 1),
+            "color": "#CC0000" if wealth == 0 else "#007959",
+            # "color": "#CC0000",
+            "label": f"{node_id}: {wealth}",
         }
-        for (agent_id) in env.G.nodes
+        for (node_id, wealth) in wealths.items()
     ]

     portrayal["edges"] = [
@@ -29,7 +33,6 @@ def network_portrayal(env):
         for edge_id, (source, target) in enumerate(env.G.edges)
     ]

     return portrayal
@@ -51,11 +54,11 @@ def gridPortrayal(agent):
         "Text": agent.unique_id,
         "x": agent.pos[0],
         "y": agent.pos[1],
-        "Color": f"rgba(31, 10, 255, 0.{color})"
+        "Color": f"rgba(31, 10, 255, 0.{color})",
     }


-grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
+grid = MyNetwork(network_portrayal, 500, 500)
 chart = ChartModule(
     [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
 )
@@ -70,7 +73,6 @@ model_params = {
         1,
         description="Choose how many agents to include in the model",
     ),
-    "network_agents": [{"agent_class": SocialMoneyAgent}],
     "height": UserSettableParameter(
         "slider",
         "height",
@@ -89,12 +91,19 @@ model_params = {
         1,
         description="Grid width",
     ),
-    "network_params": {
-        'generator': graph_generator
-    },
+    "agent_class": UserSettableParameter(
+        "choice",
+        "Agent class",
+        value="MoneyAgent",
+        choices=["MoneyAgent", "SocialMoneyAgent"],
+    ),
+    "generator": graph_generator,
 }

-canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
+canvas_element = CanvasGrid(
+    gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
+)


 server = ModularServer(
@@ -1,23 +1,26 @@
-'''
+"""
 This is an example that adds soil agents and environment in a normal
 mesa workflow.
-'''
+"""
 from mesa import Agent as MesaAgent
 from mesa.space import MultiGrid

 # from mesa.time import RandomActivation
 from mesa.datacollection import DataCollector
 from mesa.batchrunner import BatchRunner

 import networkx as nx

-from soil import NetworkAgent, Environment
+from soil import NetworkAgent, Environment, serialization


 def compute_gini(model):
     agent_wealths = [agent.wealth for agent in model.agents]
     x = sorted(agent_wealths)
     N = len(list(model.agents))
     B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
-    return (1 + (1/N) - 2*B)
+    return 1 + (1 / N) - 2 * B


 class MoneyAgent(MesaAgent):
     """
@@ -25,15 +28,14 @@ class MoneyAgent(MesaAgent):
     It will only share wealth with neighbors based on grid proximity
     """

-    def __init__(self, unique_id, model):
+    def __init__(self, unique_id, model, wealth=1):
         super().__init__(unique_id=unique_id, model=model)
-        self.wealth = 1
+        self.wealth = wealth

     def move(self):
         possible_steps = self.model.grid.get_neighborhood(
-            self.pos,
-            moore=True,
-            include_center=False)
+            self.pos, moore=True, include_center=False
+        )
         new_position = self.random.choice(possible_steps)
         self.model.grid.move_agent(self, new_position)
@@ -45,7 +47,7 @@ class MoneyAgent(MesaAgent):
         self.wealth -= 1

     def step(self):
-        self.info("Crying wolf", self.pos)
+        print("Crying wolf", self.pos)
         self.move()
         if self.wealth > 0:
             self.give_money()
@@ -56,10 +58,10 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):

     def give_money(self):
         cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
-        friends = set(self.get_neighboring_agents())
+        friends = set(self.get_neighbors())
         self.info("Trying to give money")
-        self.debug("Cellmates: ", cellmates)
-        self.debug("Friends: ", friends)
+        self.info("Cellmates: ", cellmates)
+        self.info("Friends: ", friends)

         nearby_friends = list(cellmates & friends)
@@ -69,13 +71,35 @@ class SocialMoneyAgent(NetworkAgent, MoneyAgent):
         self.wealth -= 1


+def graph_generator(n=5):
+    G = nx.Graph()
+    for ix in range(n):
+        G.add_edge(0, ix)
+    return G
+
+
 class MoneyEnv(Environment):
     """A model with some number of agents."""
-    def __init__(self, width, height, *args, topologies, **kwargs):
-
-        super().__init__(*args, topologies=topologies, **kwargs)
+    def __init__(
+        self,
+        width,
+        height,
+        N,
+        generator=graph_generator,
+        agent_class=SocialMoneyAgent,
+        topology=None,
+        **kwargs
+    ):
+        generator = serialization.deserialize(generator)
+        agent_class = serialization.deserialize(agent_class, globs=globals())
+        topology = generator(n=N)
+        super().__init__(topology=topology, N=N, **kwargs)
         self.grid = MultiGrid(width, height, False)

+        self.populate_network(agent_class=agent_class)
+
         # Create agents
         for agent in self.agents:
             x = self.random.randrange(self.grid.width)
@@ -83,37 +107,31 @@ class MoneyEnv(Environment):
             self.grid.place_agent(agent, (x, y))

         self.datacollector = DataCollector(
-            model_reporters={"Gini": compute_gini},
-            agent_reporters={"Wealth": "wealth"})
+            model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
+        )


-def graph_generator(n=5):
-    G = nx.Graph()
-    for ix in range(n):
-        G.add_edge(0, ix)
-    return G
-
-
-if __name__ == '__main__':
-
-    G = graph_generator()
-    fixed_params = {"topology": G,
-                    "width": 10,
-                    "network_agents": [{"agent_class": SocialMoneyAgent,
-                                        'weight': 1}],
-                    "height": 10}
+if __name__ == "__main__":
+    fixed_params = {
+        "generator": nx.complete_graph,
+        "width": 10,
+        "network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
+        "height": 10,
+    }

     variable_params = {"N": range(10, 100, 10)}

-    batch_run = BatchRunner(MoneyEnv,
+    batch_run = BatchRunner(
+        MoneyEnv,
         variable_parameters=variable_params,
         fixed_parameters=fixed_params,
         iterations=5,
         max_steps=100,
-        model_reporters={"Gini": compute_gini})
+        model_reporters={"Gini": compute_gini},
+    )
     batch_run.run_all()

     run_data = batch_run.get_model_vars_dataframe()
     run_data.head()
     print(run_data.Gini)
@@ -4,24 +4,26 @@ from mesa.time import RandomActivation
 from mesa.datacollection import DataCollector
 from mesa.batchrunner import BatchRunner


 def compute_gini(model):
     agent_wealths = [agent.wealth for agent in model.schedule.agents]
     x = sorted(agent_wealths)
     N = model.num_agents
     B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
-    return (1 + (1/N) - 2*B)
+    return 1 + (1 / N) - 2 * B


 class MoneyAgent(Agent):
     """An agent with fixed initial wealth."""

     def __init__(self, unique_id, model):
         super().__init__(unique_id, model)
         self.wealth = 1

     def move(self):
         possible_steps = self.model.grid.get_neighborhood(
-            self.pos,
-            moore=True,
-            include_center=False)
+            self.pos, moore=True, include_center=False
+        )
         new_position = self.random.choice(possible_steps)
         self.model.grid.move_agent(self, new_position)
@@ -37,8 +39,10 @@ class MoneyAgent(Agent):
         if self.wealth > 0:
             self.give_money()


 class MoneyModel(Model):
     """A model with some number of agents."""

     def __init__(self, N, width, height):
         self.num_agents = N
         self.grid = MultiGrid(width, height, True)
@@ -55,29 +59,29 @@ class MoneyModel(Model):
         self.grid.place_agent(a, (x, y))

         self.datacollector = DataCollector(
-            model_reporters={"Gini": compute_gini},
-            agent_reporters={"Wealth": "wealth"})
+            model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
+        )

     def step(self):
         self.datacollector.collect(self)
         self.schedule.step()


-if __name__ == '__main__':
+if __name__ == "__main__":

-    fixed_params = {"width": 10,
-                    "height": 10}
+    fixed_params = {"width": 10, "height": 10}
     variable_params = {"N": range(10, 500, 10)}

-    batch_run = BatchRunner(MoneyModel,
+    batch_run = BatchRunner(
+        MoneyModel,
         variable_params,
         fixed_params,
         iterations=5,
         max_steps=100,
-        model_reporters={"Gini": compute_gini})
+        model_reporters={"Gini": compute_gini},
+    )
     batch_run.run_all()

     run_data = batch_run.get_model_vars_dataframe()
     run_data.head()
     print(run_data.Gini)
@@ -3,84 +3,85 @@ import logging


 class DumbViewer(FSM, NetworkAgent):
-    '''
+    """
     A viewer that gets infected via TV (if it has one) and tries to infect
     its neighbors once it's infected.
-    '''
-    defaults = {
-        'prob_neighbor_spread': 0.5,
-        'prob_tv_spread': 0.1,
-    }
+    """
+
+    prob_neighbor_spread = 0.5
+    prob_tv_spread = 0.1
+    has_been_infected = False

     @default_state
     @state
     def neutral(self):
-        if self['has_tv']:
-            if self.prob(self.model['prob_tv_spread']):
-                return self.infected
+        if self["has_tv"]:
+            if self.prob(self.model["prob_tv_spread"]):
+                return self.infected
+        if self.has_been_infected:
+            return self.infected

     @state
     def infected(self):
-        for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
-            if self.prob(self.model['prob_neighbor_spread']):
+        for neighbor in self.get_neighbors(state_id=self.neutral.id):
+            if self.prob(self.model["prob_neighbor_spread"]):
                 neighbor.infect()

     def infect(self):
-        '''
+        """
         This is not a state. It is a function that other agents can use to try to
         infect this agent. DumbViewer always gets infected, but other agents like
         HerdViewer might not become infected right away
-        '''
-        self.set_state(self.infected)
+        """
+
+        self.has_been_infected = True


 class HerdViewer(DumbViewer):
-    '''
+    """
     A viewer whose probability of infection depends on the state of its neighbors.
-    '''
+    """

     def infect(self):
-        '''Notice again that this is NOT a state. See DumbViewer.infect for reference'''
-        infected = self.count_neighboring_agents(state_id=self.infected.id)
-        total = self.count_neighboring_agents()
-        prob_infect = self.model['prob_neighbor_spread'] * infected/total
-        self.debug('prob_infect', prob_infect)
+        """Notice again that this is NOT a state. See DumbViewer.infect for reference"""
+        infected = self.count_neighbors(state_id=self.infected.id)
+        total = self.count_neighbors()
+        prob_infect = self.model["prob_neighbor_spread"] * infected / total
+        self.debug("prob_infect", prob_infect)
         if self.prob(prob_infect):
-            self.set_state(self.infected)
+            self.has_been_infected = True


 class WiseViewer(HerdViewer):
-    '''
+    """
     A viewer that can change its mind.
-    '''
+    """

     defaults = {
-        'prob_neighbor_spread': 0.5,
-        'prob_neighbor_cure': 0.25,
-        'prob_tv_spread': 0.1,
+        "prob_neighbor_spread": 0.5,
+        "prob_neighbor_cure": 0.25,
+        "prob_tv_spread": 0.1,
     }

     @state
     def cured(self):
-        prob_cure = self.model['prob_neighbor_cure']
-        for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
+        prob_cure = self.model["prob_neighbor_cure"]
+        for neighbor in self.get_neighbors(state_id=self.infected.id):
             if self.prob(prob_cure):
                 try:
                     neighbor.cure()
                 except AttributeError:
-                    self.debug('Viewer {} cannot be cured'.format(neighbor.id))
+                    self.debug("Viewer {} cannot be cured".format(neighbor.id))

     def cure(self):
-        self.set_state(self.cured.id)
+        self.has_been_cured = True

     @state
     def infected(self):
-        cured = max(self.count_neighboring_agents(self.cured.id),
-                    1.0)
-        infected = max(self.count_neighboring_agents(self.infected.id),
-                       1.0)
-        prob_cure = self.model['prob_neighbor_cure'] * (cured/infected)
+        if self.has_been_cured:
+            return self.cured
+        cured = max(self.count_neighbors(self.cured.id), 1.0)
+        infected = max(self.count_neighbors(self.infected.id), 1.0)
+        prob_cure = self.model["prob_neighbor_cure"] * (cured / infected)
         if self.prob(prob_cure):
             return self.cured
-        return self.set_state(super().infected)
@@ -1,6 +1,6 @@
-'''
+"""
 Example of a fully programmatic simulation, without definition files.
-'''
+"""
 from soil import Simulation, agents
 from networkx import Graph
 import logging
@@ -14,21 +14,22 @@ def mygenerator():


 class MyAgent(agents.FSM):

     @agents.default_state
     @agents.state
     def neutral(self):
-        self.debug('I am running')
+        self.debug("I am running")
         if agents.prob(0.2):
-            self.info('This runs 2/10 times on average')
+            self.info("This runs 2/10 times on average")


-s = Simulation(name='Programmatic',
-               network_params={'generator': mygenerator},
-               num_trials=1,
-               max_time=100,
-               agent_class=MyAgent,
-               dry_run=True)
+s = Simulation(
+    name="Programmatic",
+    network_params={"generator": mygenerator},
+    num_trials=1,
+    max_time=100,
+    agent_class=MyAgent,
+    dry_run=True,
+)


 # By default, logging will only print WARNING logs (and above).
@@ -5,7 +5,8 @@ import logging


 class CityPubs(Environment):
-    '''Environment with Pubs'''
+    """Environment with Pubs"""
+
     level = logging.INFO

     def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
@@ -13,68 +14,70 @@ class CityPubs(Environment):
         pubs = {}
         for i in range(number_of_pubs):
             newpub = {
-                'name': 'The awesome pub #{}'.format(i),
-                'open': True,
-                'capacity': pub_capacity,
-                'occupancy': 0,
+                "name": "The awesome pub #{}".format(i),
+                "open": True,
+                "capacity": pub_capacity,
+                "occupancy": 0,
             }
-            pubs[newpub['name']] = newpub
-        self['pubs'] = pubs
+            pubs[newpub["name"]] = newpub
+        self["pubs"] = pubs

     def enter(self, pub_id, *nodes):
-        '''Agents will try to enter. The pub checks if it is possible'''
+        """Agents will try to enter. The pub checks if it is possible"""
         try:
-            pub = self['pubs'][pub_id]
+            pub = self["pubs"][pub_id]
         except KeyError:
-            raise ValueError('Pub {} is not available'.format(pub_id))
-        if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
+            raise ValueError("Pub {} is not available".format(pub_id))
+        if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
             return False
-        pub['occupancy'] += len(nodes)
+        pub["occupancy"] += len(nodes)
         for node in nodes:
-            node['pub'] = pub_id
+            node["pub"] = pub_id
         return True

     def available_pubs(self):
-        for pub in self['pubs'].values():
-            if pub['open'] and (pub['occupancy'] < pub['capacity']):
-                yield pub['name']
+        for pub in self["pubs"].values():
+            if pub["open"] and (pub["occupancy"] < pub["capacity"]):
+                yield pub["name"]

     def exit(self, pub_id, *node_ids):
-        '''Agents will notify the pub they want to leave'''
+        """Agents will notify the pub they want to leave"""
         try:
-            pub = self['pubs'][pub_id]
+            pub = self["pubs"][pub_id]
         except KeyError:
-            raise ValueError('Pub {} is not available'.format(pub_id))
+            raise ValueError("Pub {} is not available".format(pub_id))
         for node_id in node_ids:
             node = self.get_agent(node_id)
-            if pub_id == node['pub']:
-                del node['pub']
-                pub['occupancy'] -= 1
+            if pub_id == node["pub"]:
+                del node["pub"]
+                pub["occupancy"] -= 1


 class Patron(FSM, NetworkAgent):
-    '''Agent that looks for friends to drink with. It will do three things:
+    """Agent that looks for friends to drink with. It will do three things:
     1) Look for other patrons to drink with
     2) Look for a bar where the agent and other agents in the same group can get in.
     3) While in the bar, patrons only drink, until they get drunk and taken home.
-    '''
+    """

     level = logging.DEBUG

     pub = None
     drunk = False
     pints = 0
     max_pints = 3
+    kicked_out = False

     @default_state
     @state
     def looking_for_friends(self):
-        '''Look for friends to drink with'''
-        self.info('I am looking for friends')
-        available_friends = list(self.get_agents(drunk=False,
-                                                 pub=None,
-                                                 state_id=self.looking_for_friends.id))
+        """Look for friends to drink with"""
+        self.info("I am looking for friends")
+        available_friends = list(
+            self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
+        )
         if not available_friends:
-            self.info('Life sucks and I\'m alone!')
+            self.info("Life sucks and I'm alone!")
             return self.at_home
         befriended = self.try_friends(available_friends)
         if befriended:
@@ -82,91 +85,91 @@ class Patron(FSM, NetworkAgent):

     @state
     def looking_for_pub(self):
-        '''Look for a pub that accepts me and my friends'''
-        if self['pub'] != None:
+        """Look for a pub that accepts me and my friends"""
+        if self["pub"] != None:
             return self.sober_in_pub
-        self.debug('I am looking for a pub')
-        group = list(self.get_neighboring_agents())
+        self.debug("I am looking for a pub")
+        group = list(self.get_neighbors())
         for pub in self.model.available_pubs():
-            self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
+            self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
             if self.model.enter(pub, self, *group):
-                self.info('We\'re all {} getting in {}!'.format(len(group), pub))
+                self.info("We're all {} getting in {}!".format(len(group), pub))
                 return self.sober_in_pub

     @state
     def sober_in_pub(self):
-        '''Drink up.'''
+        """Drink up."""
         self.drink()
-        if self['pints'] > self['max_pints']:
+        if self["pints"] > self["max_pints"]:
             return self.drunk_in_pub

     @state
     def drunk_in_pub(self):
-        '''I'm out. Take me home!'''
-        self.info('I\'m so drunk. Take me home!')
-        self['drunk'] = True
-        pass  # out drunk
+        """I'm out. Take me home!"""
+        self.info("I'm so drunk. Take me home!")
+        self["drunk"] = True
+        if self.kicked_out:
+            return self.at_home
+        pass  # out drunk

     @state
     def at_home(self):
-        '''The end'''
+        """The end"""
         others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
-        self.debug('I\'m home. Just like {} of my friends'.format(len(others)))
+        self.debug("I'm home. Just like {} of my friends".format(len(others)))

     def drink(self):
-        self['pints'] += 1
-        self.debug('Cheers to that')
+        self["pints"] += 1
+        self.debug("Cheers to that")

     def kick_out(self):
-        self.set_state(self.at_home)
+        self.kicked_out = True

     def befriend(self, other_agent, force=False):
-        '''
+        """
         Try to become friends with another agent. The chances of
         success depend on both agents' openness.
-        '''
-        if force or self['openness'] > self.random.random():
-            self.model.add_edge(self, other_agent)
-            self.info('Made some friend {}'.format(other_agent))
+        """
+        if force or self["openness"] > self.random.random():
+            self.add_edge(self, other_agent)
+            self.info("Made some friend {}".format(other_agent))
             return True
         return False

     def try_friends(self, others):
-        ''' Look for random agents around me and try to befriend them'''
+        """Look for random agents around me and try to befriend them"""
         befriended = False
-        k = int(10*self['openness'])
+        k = int(10 * self["openness"])
         self.random.shuffle(others)
         for friend in islice(others, k):  # random.choice >= 3.7
             if friend == self:
                 continue
             if friend.befriend(self):
                 self.befriend(friend, force=True)
-                self.debug('Hooray! new friend: {}'.format(friend.id))
+                self.debug("Hooray! new friend: {}".format(friend.id))
                 befriended = True
             else:
-                self.debug('{} does not want to be friends'.format(friend.id))
+                self.debug("{} does not want to be friends".format(friend.id))
         return befriended


 class Police(FSM):
-    '''Simple agent to take drunk people out of pubs.'''
+    """Simple agent to take drunk people out of pubs."""
+
     level = logging.INFO

     @default_state
     @state
     def patrol(self):
-        drunksters = list(self.get_agents(drunk=True,
-                                          state_id=Patron.drunk_in_pub.id))
+        drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
         for drunk in drunksters:
-            self.info('Kicking out the trash: {}'.format(drunk.id))
+            self.info("Kicking out the trash: {}".format(drunk.id))
             drunk.kick_out()
         else:
-            self.info('No trash to take out. Too bad.')
+            self.info("No trash to take out. Too bad.")


-if __name__ == '__main__':
+if __name__ == "__main__":
     from soil import simulation
-    simulation.run_from_config('pubcrawl.yml',
-                               dry_run=True,
-                               dump=None,
-                               parallel=False)
+    simulation.run_from_config("pubcrawl.yml", dry_run=True, dump=None, parallel=False)
@@ -2,3 +2,13 @@ There are two similar implementations of this simulation.

 - `basic`. Using simple primitives
 - `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.

+The examples can be run directly in the terminal, and they accept command-line arguments.
+For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:
+
+```
+python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
+```
+
+To learn more about how this functionality works, check out the `soil.easy` function.
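The `soil.easy` helper mentioned in the README above is also used programmatically by the `cars.py` example added in this changeset. A rough sketch of that pattern follows; the import of `City` from `cars` is an assumption made for illustration, and the rest mirrors the last lines of `cars.py`:

```python
from soil import Simulation, easy
from cars import City  # assumed importable: the environment defined in cars.py above

# Configure a simulation around the example environment.
simulation = Simulation(name="RideHailing", model_class=City, model_params={"n_passengers": 2})

# easy() wraps the simulation with the command-line niceties (exporters, log level, etc.).
with easy(simulation) as sim:
    sim.run()
```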
@@ -1,12 +1,24 @@
-from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
-from soil.time import Delta
-from enum import Enum
+from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
 from collections import Counter
 import logging
 import math


-class RabbitModel(FSM, NetworkAgent):
+class RabbitEnv(Environment):
+    @property
+    def num_rabbits(self):
+        return self.count_agents(agent_class=Rabbit)
+
+    @property
+    def num_males(self):
+        return self.count_agents(agent_class=Male)
+
+    @property
+    def num_females(self):
+        return self.count_agents(agent_class=Female)
+
+
+class Rabbit(NetworkAgent, FSM):

     sexual_maturity = 30
     life_expectancy = 300
@@ -14,7 +26,7 @@ class RabbitModel(FSM, NetworkAgent):
     @default_state
     @state
     def newborn(self):
-        self.info('I am a newborn.')
+        self.info("I am a newborn.")
         self.age = 0
         self.offspring = 0
         return self.youngling
@@ -23,7 +35,7 @@ class RabbitModel(FSM, NetworkAgent):
     def youngling(self):
         self.age += 1
         if self.age >= self.sexual_maturity:
-            self.info(f'I am fertile! My age is {self.age}')
+            self.info(f"I am fertile! My age is {self.age}")
             return self.fertile

     @state
@@ -35,7 +47,7 @@ class RabbitModel(FSM, NetworkAgent):
         self.die()


-class Male(RabbitModel):
+class Male(Rabbit):
     max_females = 5
     mating_prob = 0.001
@@ -47,17 +59,18 @@ class Male(RabbitModel):
         return self.dead

         # Males try to mate
-        for f in self.model.agents(agent_class=Female,
-                                   state_id=Female.fertile.id,
-                                   limit=self.max_females):
-            self.debug('FOUND A FEMALE: ', repr(f), self.mating_prob)
-            if self.prob(self['mating_prob']):
+        for f in self.model.agents(
+            agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
+        ):
+            self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
+            if self.prob(self["mating_prob"]):
                 f.impregnate(self)
                 break  # Take a break


-class Female(RabbitModel):
-    gestation = 100
+class Female(Rabbit):
+    gestation = 10
+    pregnancy = -1

     @state
     def fertile(self):
@@ -65,66 +78,73 @@ class Female(RabbitModel):
         self.age += 1
         if self.age > self.life_expectancy:
             return self.dead
+        if self.pregnancy >= 0:
+            return self.pregnant

     def impregnate(self, male):
-        self.info(f'{repr(male)} impregnating female {repr(self)}')
+        self.info(f"impregnated by {repr(male)}")
         self.mate = male
-        self.pregnancy = -1
-        self.set_state(self.pregnant, when=self.now)
+        self.pregnancy = 0
         self.number_of_babies = int(8 + 4 * self.random.random())
-        self.debug('I am pregnant')

     @state
     def pregnant(self):
+        self.info("I am pregnant")
         self.age += 1
-        self.pregnancy += 1

-        if self.prob(self.age / self.life_expectancy):
+        if self.age >= self.life_expectancy:
             return self.die()

-        if self.pregnancy >= self.gestation:
-            self.info('Having {} babies'.format(self.number_of_babies))
-            for i in range(self.number_of_babies):
-                state = {}
-                agent_class = self.random.choice([Male, Female])
-                child = self.model.add_node(agent_class=agent_class,
-                                            topology=self.topology,
-                                            **state)
-                child.add_edge(self)
-                try:
-                    child.add_edge(self.mate)
-                    self.model.agents[self.mate].offspring += 1
-                except ValueError:
-                    self.debug('The father has passed away')
-
-                self.offspring += 1
-            self.mate = None
-            return self.fertile
+        if self.pregnancy < self.gestation:
+            self.pregnancy += 1
+            return
+
+        self.info("Having {} babies".format(self.number_of_babies))
+        for i in range(self.number_of_babies):
+            state = {}
+            agent_class = self.random.choice([Male, Female])
+            child = self.model.add_node(agent_class=agent_class, **state)
+            child.add_edge(self)
+            try:
+                child.add_edge(self.mate)
+                self.model.agents[self.mate].offspring += 1
+            except ValueError:
+                self.debug("The father has passed away")
+
+            self.offspring += 1
+        self.mate = None
+        self.pregnancy = -1
+        return self.fertile

-    @state
-    def dead(self):
-        super().dead()
-        if 'pregnancy' in self and self['pregnancy'] > -1:
-            self.info('A mother has died carrying a baby!!')
+    def die(self):
+        if "pregnancy" in self and self["pregnancy"] > -1:
+            self.info("A mother has died carrying a baby!!")
+        return super().die()


 class RandomAccident(BaseAgent):

-    level = logging.INFO
-
     def step(self):
-        rabbits_alive = self.model.topology.number_of_nodes()
+        rabbits_alive = self.model.G.number_of_nodes()

         if not rabbits_alive:
             return self.die()
|
||||||
prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
math.log10(max(1, rabbits_alive))
|
||||||
for i in self.iter_agents(agent_class=RabbitModel):
|
)
|
||||||
if i.state.id == i.dead.id:
|
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||||
|
for i in self.iter_agents(agent_class=Rabbit):
|
||||||
|
if i.state_id == i.dead.id:
|
||||||
continue
|
continue
|
||||||
if self.prob(prob_death):
|
if self.prob(prob_death):
|
||||||
self.info('I killed a rabbit: {}'.format(i.id))
|
self.info("I killed a rabbit: {}".format(i.id))
|
||||||
rabbits_alive -= 1
|
rabbits_alive -= 1
|
||||||
i.set_state(i.dead)
|
i.die()
|
||||||
self.debug('Rabbits alive: {}'.format(rabbits_alive))
|
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
from soil import easy
|
||||||
|
|
||||||
|
with easy("rabbits.yml") as sim:
|
||||||
|
sim.run()
|
||||||
|
|||||||
@@ -7,21 +7,18 @@ description: null
|
|||||||
group: null
|
group: null
|
||||||
interval: 1.0
|
interval: 1.0
|
||||||
max_time: 100
|
max_time: 100
|
||||||
model_class: soil.environment.Environment
|
model_class: rabbit_agents.RabbitEnv
|
||||||
model_params:
|
model_params:
|
||||||
agents:
|
agents:
|
||||||
topology: default
|
topology: true
|
||||||
agent_class: rabbit_agents.RabbitModel
|
|
||||||
distribution:
|
distribution:
|
||||||
- agent_class: rabbit_agents.Male
|
- agent_class: rabbit_agents.Male
|
||||||
topology: default
|
|
||||||
weight: 1
|
weight: 1
|
||||||
- agent_class: rabbit_agents.Female
|
- agent_class: rabbit_agents.Female
|
||||||
topology: default
|
|
||||||
weight: 1
|
weight: 1
|
||||||
fixed:
|
fixed:
|
||||||
- agent_class: rabbit_agents.RandomAccident
|
- agent_class: rabbit_agents.RandomAccident
|
||||||
topology: null
|
topology: false
|
||||||
hidden: true
|
hidden: true
|
||||||
state:
|
state:
|
||||||
group: environment
|
group: environment
|
||||||
@@ -29,13 +26,17 @@ model_params:
|
|||||||
group: network
|
group: network
|
||||||
mating_prob: 0.1
|
mating_prob: 0.1
|
||||||
prob_death: 0.001
|
prob_death: 0.001
|
||||||
topologies:
|
|
||||||
default:
|
|
||||||
topology:
|
topology:
|
||||||
|
fixed:
|
||||||
directed: true
|
directed: true
|
||||||
links: []
|
links: []
|
||||||
nodes:
|
nodes:
|
||||||
- id: 1
|
- id: 1
|
||||||
- id: 0
|
- id: 0
|
||||||
|
model_reporters:
|
||||||
|
num_males: 'num_males'
|
||||||
|
num_females: 'num_females'
|
||||||
|
num_rabbits: |
|
||||||
|
py:lambda env: env.num_males + env.num_females
|
||||||
extra:
|
extra:
|
||||||
visualization_params: {}
|
visualization_params: {}
|
||||||
|
|||||||
@@ -1,130 +1,157 @@
|
|||||||
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
|
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment
|
||||||
from soil.time import Delta, When, NEVER
|
from soil.time import Delta
|
||||||
from enum import Enum
|
from enum import Enum
|
||||||
|
from collections import Counter
|
||||||
import logging
|
import logging
|
||||||
import math
|
import math
|
||||||
|
|
||||||
|
|
||||||
class RabbitModel(FSM, NetworkAgent):
|
class RabbitEnv(Environment):
|
||||||
|
@property
|
||||||
|
def num_rabbits(self):
|
||||||
|
return self.count_agents(agent_class=Rabbit)
|
||||||
|
|
||||||
mating_prob = 0.005
|
@property
|
||||||
offspring = 0
|
def num_males(self):
|
||||||
|
return self.count_agents(agent_class=Male)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def num_females(self):
|
||||||
|
return self.count_agents(agent_class=Female)
|
||||||
|
|
||||||
|
|
||||||
|
class Rabbit(FSM, NetworkAgent):
|
||||||
|
|
||||||
|
sexual_maturity = 30
|
||||||
|
life_expectancy = 300
|
||||||
birth = None
|
birth = None
|
||||||
|
|
||||||
sexual_maturity = 3
|
@property
|
||||||
life_expectancy = 30
|
def age(self):
|
||||||
|
if self.birth is None:
|
||||||
|
return None
|
||||||
|
return self.now - self.birth
|
||||||
|
|
||||||
@default_state
|
@default_state
|
||||||
@state
|
@state
|
||||||
def newborn(self):
|
def newborn(self):
|
||||||
|
self.info("I am a newborn.")
|
||||||
self.birth = self.now
|
self.birth = self.now
|
||||||
self.info(f'I am a newborn.')
|
self.offspring = 0
|
||||||
self.model['rabbits_alive'] = self.model.get('rabbits_alive', 0) + 1
|
return self.youngling, Delta(self.sexual_maturity - self.age)
|
||||||
|
|
||||||
# Here we can skip the `youngling` state by using a coroutine/generator.
|
@state
|
||||||
while self.age < self.sexual_maturity:
|
def youngling(self):
|
||||||
interval = self.sexual_maturity - self.age
|
if self.age >= self.sexual_maturity:
|
||||||
yield Delta(interval)
|
self.info(f"I am fertile! My age is {self.age}")
|
||||||
|
|
||||||
self.info(f'I am fertile! My age is {self.age}')
|
|
||||||
return self.fertile
|
return self.fertile
|
||||||
|
|
||||||
@property
|
|
||||||
def age(self):
|
|
||||||
return self.now - self.birth
|
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def fertile(self):
|
def fertile(self):
|
||||||
raise Exception("Each subclass should define its fertile state")
|
raise Exception("Each subclass should define its fertile state")
|
||||||
|
|
||||||
def step(self):
|
@state
|
||||||
super().step()
|
def dead(self):
|
||||||
if self.prob(self.age / self.life_expectancy):
|
self.die()
|
||||||
return self.die()
|
|
||||||
|
|
||||||
|
|
||||||
class Male(RabbitModel):
|
class Male(Rabbit):
|
||||||
|
|
||||||
max_females = 5
|
max_females = 5
|
||||||
|
mating_prob = 0.001
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def fertile(self):
|
def fertile(self):
|
||||||
# Males try to mate
|
|
||||||
for f in self.model.agents(agent_class=Female,
|
|
||||||
state_id=Female.fertile.id,
|
|
||||||
limit=self.max_females):
|
|
||||||
self.debug('Found a female:', repr(f))
|
|
||||||
if self.prob(self['mating_prob']):
|
|
||||||
f.impregnate(self)
|
|
||||||
break # Take a break, don't try to impregnate the rest
|
|
||||||
|
|
||||||
|
|
||||||
class Female(RabbitModel):
|
|
||||||
due_date = None
|
|
||||||
age_of_pregnancy = None
|
|
||||||
gestation = 10
|
|
||||||
mate = None
|
|
||||||
|
|
||||||
@state
|
|
||||||
def fertile(self):
|
|
||||||
return self.fertile, NEVER
|
|
||||||
|
|
||||||
@state
|
|
||||||
def pregnant(self):
|
|
||||||
self.info('I am pregnant')
|
|
||||||
if self.age > self.life_expectancy:
|
if self.age > self.life_expectancy:
|
||||||
return self.dead
|
return self.dead
|
||||||
|
|
||||||
self.due_date = self.now + self.gestation
|
# Males try to mate
|
||||||
|
for f in self.model.agents(
|
||||||
|
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||||
|
):
|
||||||
|
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||||
|
if self.prob(self["mating_prob"]):
|
||||||
|
f.impregnate(self)
|
||||||
|
break # Do not try to impregnate other females
|
||||||
|
|
||||||
number_of_babies = int(8+4*self.random.random())
|
|
||||||
|
|
||||||
while self.now < self.due_date:
|
class Female(Rabbit):
|
||||||
yield When(self.due_date)
|
gestation = 10
|
||||||
|
conception = None
|
||||||
self.info('Having {} babies'.format(number_of_babies))
|
|
||||||
for i in range(number_of_babies):
|
|
||||||
agent_class = self.random.choice([Male, Female])
|
|
||||||
child = self.model.add_node(agent_class=agent_class,
|
|
||||||
topology=self.topology)
|
|
||||||
self.model.add_edge(self, child)
|
|
||||||
self.model.add_edge(self.mate, child)
|
|
||||||
self.offspring += 1
|
|
||||||
self.model.agents[self.mate].offspring += 1
|
|
||||||
self.mate = None
|
|
||||||
self.due_date = None
|
|
||||||
return self.fertile
|
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def dead(self):
|
def fertile(self):
|
||||||
super().dead()
|
# Just wait for a Male
|
||||||
if self.due_date is not None:
|
if self.age > self.life_expectancy:
|
||||||
self.info('A mother has died carrying a baby!!')
|
return self.dead
|
||||||
|
if self.conception is not None:
|
||||||
|
return self.pregnant
|
||||||
|
|
||||||
|
@property
|
||||||
|
def pregnancy(self):
|
||||||
|
if self.conception is None:
|
||||||
|
return None
|
||||||
|
return self.now - self.conception
|
||||||
|
|
||||||
def impregnate(self, male):
|
def impregnate(self, male):
|
||||||
self.info(f'{repr(male)} impregnating female {repr(self)}')
|
self.info(f"impregnated by {repr(male)}")
|
||||||
self.mate = male
|
self.mate = male
|
||||||
self.set_state(self.pregnant, when=self.now)
|
self.conception = self.now
|
||||||
|
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||||
|
|
||||||
|
@state
|
||||||
|
def pregnant(self):
|
||||||
|
self.debug("I am pregnant")
|
||||||
|
|
||||||
|
if self.age > self.life_expectancy:
|
||||||
|
self.info("Dying before giving birth")
|
||||||
|
return self.die()
|
||||||
|
|
||||||
|
if self.pregnancy >= self.gestation:
|
||||||
|
self.info("Having {} babies".format(self.number_of_babies))
|
||||||
|
for i in range(self.number_of_babies):
|
||||||
|
state = {}
|
||||||
|
agent_class = self.random.choice([Male, Female])
|
||||||
|
child = self.model.add_node(agent_class=agent_class, **state)
|
||||||
|
child.add_edge(self)
|
||||||
|
if self.mate:
|
||||||
|
child.add_edge(self.mate)
|
||||||
|
self.mate.offspring += 1
|
||||||
|
else:
|
||||||
|
self.debug("The father has passed away")
|
||||||
|
|
||||||
|
self.offspring += 1
|
||||||
|
self.mate = None
|
||||||
|
return self.fertile
|
||||||
|
|
||||||
|
def die(self):
|
||||||
|
if self.pregnancy is not None:
|
||||||
|
self.info("A mother has died carrying a baby!!")
|
||||||
|
return super().die()
|
||||||
|
|
||||||
|
|
||||||
class RandomAccident(BaseAgent):
|
class RandomAccident(BaseAgent):
|
||||||
|
|
||||||
level = logging.INFO
|
|
||||||
|
|
||||||
def step(self):
|
def step(self):
|
||||||
rabbits_total = self.model.topology.number_of_nodes()
|
rabbits_alive = self.model.G.number_of_nodes()
|
||||||
if 'rabbits_alive' not in self.model:
|
|
||||||
self.model['rabbits_alive'] = 0
|
if not rabbits_alive:
|
||||||
rabbits_alive = self.model.get('rabbits_alive', rabbits_total)
|
return self.die()
|
||||||
prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
|
||||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||||
for i in self.model.network_agents:
|
math.log10(max(1, rabbits_alive))
|
||||||
if i.state.id == i.dead.id:
|
)
|
||||||
|
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||||
|
for i in self.iter_agents(agent_class=Rabbit):
|
||||||
|
if i.state_id == i.dead.id:
|
||||||
continue
|
continue
|
||||||
if self.prob(prob_death):
|
if self.prob(prob_death):
|
||||||
self.info('I killed a rabbit: {}'.format(i.id))
|
self.info("I killed a rabbit: {}".format(i.id))
|
||||||
rabbits_alive = self.model['rabbits_alive'] = rabbits_alive -1
|
rabbits_alive -= 1
|
||||||
i.set_state(i.dead)
|
i.die()
|
||||||
self.debug('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
|
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||||
if self.model.count_agents(state_id=RabbitModel.dead.id) == self.model.topology.number_of_nodes():
|
|
||||||
self.die()
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
from soil import easy
|
||||||
|
|
||||||
|
with easy("rabbits.yml") as sim:
|
||||||
|
sim.run()
|
||||||
|
|||||||
@@ -7,21 +7,18 @@ description: null
|
|||||||
group: null
|
group: null
|
||||||
interval: 1.0
|
interval: 1.0
|
||||||
max_time: 100
|
max_time: 100
|
||||||
model_class: soil.environment.Environment
|
model_class: rabbit_agents.RabbitEnv
|
||||||
model_params:
|
model_params:
|
||||||
agents:
|
agents:
|
||||||
topology: default
|
topology: true
|
||||||
agent_class: rabbit_agents.RabbitModel
|
|
||||||
distribution:
|
distribution:
|
||||||
- agent_class: rabbit_agents.Male
|
- agent_class: rabbit_agents.Male
|
||||||
topology: default
|
|
||||||
weight: 1
|
weight: 1
|
||||||
- agent_class: rabbit_agents.Female
|
- agent_class: rabbit_agents.Female
|
||||||
topology: default
|
|
||||||
weight: 1
|
weight: 1
|
||||||
fixed:
|
fixed:
|
||||||
- agent_class: rabbit_agents.RandomAccident
|
- agent_class: rabbit_agents.RandomAccident
|
||||||
topology: null
|
topology: false
|
||||||
hidden: true
|
hidden: true
|
||||||
state:
|
state:
|
||||||
group: environment
|
group: environment
|
||||||
@@ -29,13 +26,17 @@ model_params:
|
|||||||
group: network
|
group: network
|
||||||
mating_prob: 0.1
|
mating_prob: 0.1
|
||||||
prob_death: 0.001
|
prob_death: 0.001
|
||||||
topologies:
|
|
||||||
default:
|
|
||||||
topology:
|
topology:
|
||||||
|
fixed:
|
||||||
directed: true
|
directed: true
|
||||||
links: []
|
links: []
|
||||||
nodes:
|
nodes:
|
||||||
- id: 1
|
- id: 1
|
||||||
- id: 0
|
- id: 0
|
||||||
|
model_reporters:
|
||||||
|
num_males: 'num_males'
|
||||||
|
num_females: 'num_females'
|
||||||
|
num_rabbits: |
|
||||||
|
py:lambda env: env.num_males + env.num_females
|
||||||
extra:
|
extra:
|
||||||
visualization_params: {}
|
visualization_params: {}
|
||||||
|
|||||||
@@ -1,29 +1,27 @@
|
|||||||
'''
|
"""
|
||||||
Example of setting a
|
Example of setting a
|
||||||
Example of a fully programmatic simulation, without definition files.
|
Example of a fully programmatic simulation, without definition files.
|
||||||
'''
|
"""
|
||||||
from soil import Simulation, agents
|
from soil import Simulation, agents
|
||||||
from soil.time import Delta
|
from soil.time import Delta
|
||||||
import logging
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
class MyAgent(agents.FSM):
|
class MyAgent(agents.FSM):
|
||||||
'''
|
"""
|
||||||
An agent that first does a ping
|
An agent that first does a ping
|
||||||
'''
|
"""
|
||||||
|
|
||||||
defaults = {'pong_counts': 2}
|
defaults = {"pong_counts": 2}
|
||||||
|
|
||||||
@agents.default_state
|
@agents.default_state
|
||||||
@agents.state
|
@agents.state
|
||||||
def ping(self):
|
def ping(self):
|
||||||
self.info('Ping')
|
self.info("Ping")
|
||||||
return self.pong, Delta(self.random.expovariate(1 / 16))
|
return self.pong, Delta(self.random.expovariate(1 / 16))
|
||||||
|
|
||||||
@agents.state
|
@agents.state
|
||||||
def pong(self):
|
def pong(self):
|
||||||
self.info('Pong')
|
self.info("Pong")
|
||||||
self.pong_counts -= 1
|
self.pong_counts -= 1
|
||||||
self.info(str(self.pong_counts))
|
self.info(str(self.pong_counts))
|
||||||
if self.pong_counts < 1:
|
if self.pong_counts < 1:
|
||||||
@@ -31,14 +29,15 @@ class MyAgent(agents.FSM):
|
|||||||
return None, Delta(self.random.expovariate(1 / 16))
|
return None, Delta(self.random.expovariate(1 / 16))
|
||||||
|
|
||||||
|
|
||||||
s = Simulation(name='Programmatic',
|
s = Simulation(
|
||||||
network_agents=[{'agent_class': MyAgent, 'id': 0}],
|
name="Programmatic",
|
||||||
topology={'nodes': [{'id': 0}], 'links': []},
|
network_agents=[{"agent_class": MyAgent, "id": 0}],
|
||||||
|
topology={"nodes": [{"id": 0}], "links": []},
|
||||||
num_trials=1,
|
num_trials=1,
|
||||||
max_time=100,
|
max_time=100,
|
||||||
agent_class=MyAgent,
|
agent_class=MyAgent,
|
||||||
dry_run=True)
|
dry_run=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
envs = s.run()
|
envs = s.run()
|
||||||
|
|||||||
@@ -20,35 +20,52 @@ class TerroristSpreadModel(FSM, Geo):
|
|||||||
def __init__(self, model=None, unique_id=0, state=()):
|
def __init__(self, model=None, unique_id=0, state=()):
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||||
|
|
||||||
self.information_spread_intensity = model.environment_params['information_spread_intensity']
|
self.information_spread_intensity = model.environment_params[
|
||||||
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
|
"information_spread_intensity"
|
||||||
self.prob_interaction = model.environment_params['prob_interaction']
|
]
|
||||||
|
self.terrorist_additional_influence = model.environment_params[
|
||||||
|
"terrorist_additional_influence"
|
||||||
|
]
|
||||||
|
self.prob_interaction = model.environment_params["prob_interaction"]
|
||||||
|
|
||||||
if self['id'] == self.civilian.id: # Civilian
|
if self["id"] == self.civilian.id: # Civilian
|
||||||
self.mean_belief = self.random.uniform(0.00, 0.5)
|
self.mean_belief = self.random.uniform(0.00, 0.5)
|
||||||
elif self['id'] == self.terrorist.id: # Terrorist
|
elif self["id"] == self.terrorist.id: # Terrorist
|
||||||
self.mean_belief = self.random.uniform(0.8, 1.00)
|
self.mean_belief = self.random.uniform(0.8, 1.00)
|
||||||
elif self['id'] == self.leader.id: # Leader
|
elif self["id"] == self.leader.id: # Leader
|
||||||
self.mean_belief = 1.00
|
self.mean_belief = 1.00
|
||||||
else:
|
else:
|
||||||
raise Exception('Invalid state id: {}'.format(self['id']))
|
raise Exception("Invalid state id: {}".format(self["id"]))
|
||||||
|
|
||||||
if 'min_vulnerability' in model.environment_params:
|
if "min_vulnerability" in model.environment_params:
|
||||||
self.vulnerability = self.random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
|
self.vulnerability = self.random.uniform(
|
||||||
|
model.environment_params["min_vulnerability"],
|
||||||
|
model.environment_params["max_vulnerability"],
|
||||||
|
)
|
||||||
else:
|
else:
|
||||||
self.vulnerability = self.random.uniform( 0, model.environment_params['max_vulnerability'] )
|
self.vulnerability = self.random.uniform(
|
||||||
|
0, model.environment_params["max_vulnerability"]
|
||||||
|
)
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def civilian(self):
|
def civilian(self):
|
||||||
neighbours = list(self.get_neighboring_agents(agent_class=TerroristSpreadModel))
|
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
|
||||||
if len(neighbours) > 0:
|
if len(neighbours) > 0:
|
||||||
# Only interact with some of the neighbors
|
# Only interact with some of the neighbors
|
||||||
interactions = list(n for n in neighbours if self.random.random() <= self.prob_interaction)
|
interactions = list(
|
||||||
|
n for n in neighbours if self.random.random() <= self.prob_interaction
|
||||||
|
)
|
||||||
influence = sum(self.degree(i) for i in interactions)
|
influence = sum(self.degree(i) for i in interactions)
|
||||||
mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
|
mean_belief = sum(
|
||||||
mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
|
i.mean_belief * self.degree(i) / influence for i in interactions
|
||||||
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
)
|
||||||
|
mean_belief = (
|
||||||
|
mean_belief * self.information_spread_intensity
|
||||||
|
+ self.mean_belief * (1 - self.information_spread_intensity)
|
||||||
|
)
|
||||||
|
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||||
|
1 - self.vulnerability
|
||||||
|
)
|
||||||
|
|
||||||
if self.mean_belief >= 0.8:
|
if self.mean_belief >= 0.8:
|
||||||
return self.terrorist
|
return self.terrorist
|
||||||
@@ -56,20 +73,30 @@ class TerroristSpreadModel(FSM, Geo):
|
|||||||
@state
|
@state
|
||||||
def leader(self):
|
def leader(self):
|
||||||
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
|
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
|
||||||
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]):
|
for neighbour in self.get_neighbors(
|
||||||
|
state_id=[self.terrorist.id, self.leader.id]
|
||||||
|
):
|
||||||
if self.betweenness(neighbour) > self.betweenness(self):
|
if self.betweenness(neighbour) > self.betweenness(self):
|
||||||
return self.terrorist
|
return self.terrorist
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def terrorist(self):
|
def terrorist(self):
|
||||||
neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
|
neighbours = self.get_agents(
|
||||||
|
state_id=[self.terrorist.id, self.leader.id],
|
||||||
agent_class=TerroristSpreadModel,
|
agent_class=TerroristSpreadModel,
|
||||||
limit_neighbors=True)
|
limit_neighbors=True,
|
||||||
|
)
|
||||||
if len(neighbours) > 0:
|
if len(neighbours) > 0:
|
||||||
influence = sum(self.degree(n) for n in neighbours)
|
influence = sum(self.degree(n) for n in neighbours)
|
||||||
mean_belief = sum( n.mean_belief * self.degree(n) / influence for n in neighbours )
|
mean_belief = sum(
|
||||||
mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
|
n.mean_belief * self.degree(n) / influence for n in neighbours
|
||||||
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
|
)
|
||||||
|
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||||
|
1 - self.vulnerability
|
||||||
|
)
|
||||||
|
self.mean_belief = self.mean_belief ** (
|
||||||
|
1 - self.terrorist_additional_influence
|
||||||
|
)
|
||||||
|
|
||||||
# Check if there are any leaders in the group
|
# Check if there are any leaders in the group
|
||||||
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
|
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
|
||||||
@@ -82,21 +109,29 @@ class TerroristSpreadModel(FSM, Geo):
|
|||||||
return self.leader
|
return self.leader
|
||||||
|
|
||||||
def ego_search(self, steps=1, center=False, node=None, **kwargs):
|
def ego_search(self, steps=1, center=False, node=None, **kwargs):
|
||||||
'''Get a list of nodes in the ego network of *node* of radius *steps*'''
|
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
|
||||||
node = as_node(node if node is not None else self)
|
node = as_node(node if node is not None else self)
|
||||||
G = self.subgraph(**kwargs)
|
G = self.subgraph(**kwargs)
|
||||||
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
|
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
|
||||||
|
|
||||||
def degree(self, node, force=False):
|
def degree(self, node, force=False):
|
||||||
node = as_node(node)
|
node = as_node(node)
|
||||||
if force or (not hasattr(self.model, '_degree')) or getattr(self.model, '_last_step', 0) < self.now:
|
if (
|
||||||
|
force
|
||||||
|
or (not hasattr(self.model, "_degree"))
|
||||||
|
or getattr(self.model, "_last_step", 0) < self.now
|
||||||
|
):
|
||||||
self.model._degree = nx.degree_centrality(self.G)
|
self.model._degree = nx.degree_centrality(self.G)
|
||||||
self.model._last_step = self.now
|
self.model._last_step = self.now
|
||||||
return self.model._degree[node]
|
return self.model._degree[node]
|
||||||
|
|
||||||
def betweenness(self, node, force=False):
|
def betweenness(self, node, force=False):
|
||||||
node = as_node(node)
|
node = as_node(node)
|
||||||
if force or (not hasattr(self.model, '_betweenness')) or getattr(self.model, '_last_step', 0) < self.now:
|
if (
|
||||||
|
force
|
||||||
|
or (not hasattr(self.model, "_betweenness"))
|
||||||
|
or getattr(self.model, "_last_step", 0) < self.now
|
||||||
|
):
|
||||||
self.model._betweenness = nx.betweenness_centrality(self.G)
|
self.model._betweenness = nx.betweenness_centrality(self.G)
|
||||||
self.model._last_step = self.now
|
self.model._last_step = self.now
|
||||||
return self.model._betweenness[node]
|
return self.model._betweenness[node]
|
||||||
@@ -114,17 +149,20 @@ class TrainingAreaModel(FSM, Geo):
|
|||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
def __init__(self, model=None, unique_id=0, state=()):
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||||
self.training_influence = model.environment_params['training_influence']
|
self.training_influence = model.environment_params["training_influence"]
|
||||||
if 'min_vulnerability' in model.environment_params:
|
if "min_vulnerability" in model.environment_params:
|
||||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
self.min_vulnerability = model.environment_params["min_vulnerability"]
|
||||||
else: self.min_vulnerability = 0
|
else:
|
||||||
|
self.min_vulnerability = 0
|
||||||
|
|
||||||
@default_state
|
@default_state
|
||||||
@state
|
@state
|
||||||
def terrorist(self):
|
def terrorist(self):
|
||||||
for neighbour in self.get_neighboring_agents(agent_class=TerroristSpreadModel):
|
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
|
||||||
if neighbour.vulnerability > self.min_vulnerability:
|
if neighbour.vulnerability > self.min_vulnerability:
|
||||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
|
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||||
|
1 - self.training_influence
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class HavenModel(FSM, Geo):
|
class HavenModel(FSM, Geo):
|
||||||
@@ -141,14 +179,15 @@ class HavenModel(FSM, Geo):
|
|||||||
|
|
||||||
def __init__(self, model=None, unique_id=0, state=()):
|
def __init__(self, model=None, unique_id=0, state=()):
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||||
self.haven_influence = model.environment_params['haven_influence']
|
self.haven_influence = model.environment_params["haven_influence"]
|
||||||
if 'min_vulnerability' in model.environment_params:
|
if "min_vulnerability" in model.environment_params:
|
||||||
self.min_vulnerability = model.environment_params['min_vulnerability']
|
self.min_vulnerability = model.environment_params["min_vulnerability"]
|
||||||
else: self.min_vulnerability = 0
|
else:
|
||||||
self.max_vulnerability = model.environment_params['max_vulnerability']
|
self.min_vulnerability = 0
|
||||||
|
self.max_vulnerability = model.environment_params["max_vulnerability"]
|
||||||
|
|
||||||
def get_occupants(self, **kwargs):
|
def get_occupants(self, **kwargs):
|
||||||
return self.get_neighboring_agents(agent_class=TerroristSpreadModel, **kwargs)
|
return self.get_neighbors(agent_class=TerroristSpreadModel, **kwargs)
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def civilian(self):
|
def civilian(self):
|
||||||
@@ -158,14 +197,18 @@ class HavenModel(FSM, Geo):
|
|||||||
|
|
||||||
for neighbour in self.get_occupants():
|
for neighbour in self.get_occupants():
|
||||||
if neighbour.vulnerability > self.min_vulnerability:
|
if neighbour.vulnerability > self.min_vulnerability:
|
||||||
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence )
|
neighbour.vulnerability = neighbour.vulnerability * (
|
||||||
|
1 - self.haven_influence
|
||||||
|
)
|
||||||
return self.civilian
|
return self.civilian
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def terrorist(self):
|
def terrorist(self):
|
||||||
for neighbour in self.get_occupants():
|
for neighbour in self.get_occupants():
|
||||||
if neighbour.vulnerability < self.max_vulnerability:
|
if neighbour.vulnerability < self.max_vulnerability:
|
||||||
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence )
|
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||||
|
1 - self.haven_influence
|
||||||
|
)
|
||||||
return self.terrorist
|
return self.terrorist
|
||||||
|
|
||||||
|
|
||||||
@@ -184,10 +227,10 @@ class TerroristNetworkModel(TerroristSpreadModel):
|
|||||||
def __init__(self, model=None, unique_id=0, state=()):
|
def __init__(self, model=None, unique_id=0, state=()):
|
||||||
super().__init__(model=model, unique_id=unique_id, state=state)
|
super().__init__(model=model, unique_id=unique_id, state=state)
|
||||||
|
|
||||||
self.vision_range = model.environment_params['vision_range']
|
self.vision_range = model.environment_params["vision_range"]
|
||||||
self.sphere_influence = model.environment_params['sphere_influence']
|
self.sphere_influence = model.environment_params["sphere_influence"]
|
||||||
self.weight_social_distance = model.environment_params['weight_social_distance']
|
self.weight_social_distance = model.environment_params["weight_social_distance"]
|
||||||
self.weight_link_distance = model.environment_params['weight_link_distance']
|
self.weight_link_distance = model.environment_params["weight_link_distance"]
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def terrorist(self):
|
def terrorist(self):
|
||||||
@@ -200,22 +243,43 @@ class TerroristNetworkModel(TerroristSpreadModel):
|
|||||||
return super().leader()
|
return super().leader()
|
||||||
|
|
||||||
def update_relationships(self):
|
def update_relationships(self):
|
||||||
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
|
if self.count_neighbors(state_id=self.civilian.id) == 0:
|
||||||
close_ups = set(self.geo_search(radius=self.vision_range, agent_class=TerroristNetworkModel))
|
close_ups = set(
|
||||||
step_neighbours = set(self.ego_search(self.sphere_influence, agent_class=TerroristNetworkModel, center=False))
|
self.geo_search(
|
||||||
neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_class=TerroristNetworkModel))
|
radius=self.vision_range, agent_class=TerroristNetworkModel
|
||||||
|
)
|
||||||
|
)
|
||||||
|
step_neighbours = set(
|
||||||
|
self.ego_search(
|
||||||
|
self.sphere_influence,
|
||||||
|
agent_class=TerroristNetworkModel,
|
||||||
|
center=False,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
neighbours = set(
|
||||||
|
agent.id
|
||||||
|
for agent in self.get_neighbors(
|
||||||
|
agent_class=TerroristNetworkModel
|
||||||
|
)
|
||||||
|
)
|
||||||
search = (close_ups | step_neighbours) - neighbours
|
search = (close_ups | step_neighbours) - neighbours
|
||||||
for agent in self.get_agents(search):
|
for agent in self.get_agents(search):
|
||||||
social_distance = 1 / self.shortest_path_length(agent.id)
|
social_distance = 1 / self.shortest_path_length(agent.id)
|
||||||
spatial_proximity = ( 1 - self.get_distance(agent.id) )
|
spatial_proximity = 1 - self.get_distance(agent.id)
|
||||||
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
|
prob_new_interaction = (
|
||||||
if agent['id'] == agent.civilian.id and self.random.random() < prob_new_interaction:
|
self.weight_social_distance * social_distance
|
||||||
|
+ self.weight_link_distance * spatial_proximity
|
||||||
|
)
|
||||||
|
if (
|
||||||
|
agent["id"] == agent.civilian.id
|
||||||
|
and self.random.random() < prob_new_interaction
|
||||||
|
):
|
||||||
self.add_edge(agent)
|
self.add_edge(agent)
|
||||||
break
|
break
|
||||||
|
|
||||||
def get_distance(self, target):
|
def get_distance(self, target):
|
||||||
source_x, source_y = nx.get_node_attributes(self.G, 'pos')[self.id]
|
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
|
||||||
target_x, target_y = nx.get_node_attributes(self.G, 'pos')[target]
|
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
|
||||||
dx = abs(source_x - target_x)
|
dx = abs(source_x - target_x)
|
||||||
dy = abs(source_y - target_y)
|
dy = abs(source_y - target_y)
|
||||||
return (dx**2 + dy**2) ** (1 / 2)
|
return (dx**2 + dy**2) ** (1 / 2)
|
||||||
@@ -224,4 +288,4 @@ class TerroristNetworkModel(TerroristSpreadModel):
|
|||||||
try:
|
try:
|
||||||
return nx.shortest_path_length(self.G, self.id, target)
|
return nx.shortest_path_length(self.G, self.id, target)
|
||||||
except nx.NetworkXNoPath:
|
except nx.NetworkXNoPath:
|
||||||
return float('inf')
|
return float("inf")
|
||||||
|
|||||||
@@ -5,6 +5,6 @@ pyyaml>=5.1
|
|||||||
pandas>=1
|
pandas>=1
|
||||||
SALib>=1.3
|
SALib>=1.3
|
||||||
Jinja2
|
Jinja2
|
||||||
Mesa>=1
|
Mesa>=1.1
|
||||||
pydantic>=1.9
|
pydantic>=1.9
|
||||||
sqlalchemy>=1.4
|
sqlalchemy>=1.4
|
||||||
|
|||||||
2
setup.py
2
setup.py
@@ -53,6 +53,6 @@ setup(
|
|||||||
include_package_data=True,
|
include_package_data=True,
|
||||||
entry_points={
|
entry_points={
|
||||||
'console_scripts':
|
'console_scripts':
|
||||||
['soil = soil.__init__:main',
|
['soil = soil.__main__:main',
|
||||||
'soil-web = soil.web.__init__:main']
|
'soil-web = soil.web.__init__:main']
|
||||||
})
|
})
|
||||||
|
|||||||
@@ -1 +1 @@
|
|||||||
0.20.7
|
0.30.0rc2
|
||||||
233
soil/__init__.py
233
soil/__init__.py
@@ -5,6 +5,7 @@ import sys
|
|||||||
import os
|
import os
|
||||||
import logging
|
import logging
|
||||||
import traceback
|
import traceback
|
||||||
|
from contextlib import contextmanager
|
||||||
|
|
||||||
from .version import __version__
|
from .version import __version__
|
||||||
|
|
||||||
@@ -16,98 +17,185 @@ except NameError:
|
|||||||
from .agents import *
|
from .agents import *
|
||||||
from . import agents
|
from . import agents
|
||||||
from .simulation import *
|
from .simulation import *
|
||||||
from .environment import Environment
|
from .environment import Environment, EventedEnvironment
|
||||||
from . import serialization
|
from . import serialization
|
||||||
from .utils import logger
|
from .utils import logger
|
||||||
from .time import *
|
from .time import *
|
||||||
|
|
||||||
def main(cfg='simulation.yml', **kwargs):
|
|
||||||
|
def main(
|
||||||
|
cfg="simulation.yml",
|
||||||
|
exporters=None,
|
||||||
|
parallel=None,
|
||||||
|
output="soil_output",
|
||||||
|
*,
|
||||||
|
do_run=False,
|
||||||
|
debug=False,
|
||||||
|
pdb=False,
|
||||||
|
**kwargs,
|
||||||
|
):
|
||||||
|
|
||||||
|
if isinstance(cfg, Simulation):
|
||||||
|
sim = cfg
|
||||||
import argparse
|
import argparse
|
||||||
from . import simulation
|
from . import simulation
|
||||||
|
|
||||||
logger.info('Running SOIL version: {}'.format(__version__))
|
logger.info("Running SOIL version: {}".format(__version__))
|
||||||
|
|
||||||
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
|
parser = argparse.ArgumentParser(description="Run a SOIL simulation")
|
||||||
parser.add_argument('file', type=str,
|
parser.add_argument(
|
||||||
|
"file",
|
||||||
|
type=str,
|
||||||
nargs="?",
|
nargs="?",
|
||||||
default=cfg,
|
default=cfg if sim is None else '',
|
||||||
help='Configuration file for the simulation (e.g., YAML or JSON)')
|
help="Configuration file for the simulation (e.g., YAML or JSON)",
|
||||||
parser.add_argument('--version', action='store_true',
|
)
|
||||||
help='Show version info and exit')
|
parser.add_argument(
|
||||||
parser.add_argument('--module', '-m', type=str,
|
"--version", action="store_true", help="Show version info and exit"
|
||||||
help='file containing the code of any custom agents.')
|
)
|
||||||
parser.add_argument('--dry-run', '--dry', action='store_true',
|
parser.add_argument(
|
||||||
help='Do not store the results of the simulation to disk, show in terminal instead.')
|
"--module",
|
||||||
parser.add_argument('--pdb', action='store_true',
|
"-m",
|
||||||
help='Use a pdb console in case of exception.')
|
type=str,
|
||||||
parser.add_argument('--debug', action='store_true',
|
help="file containing the code of any custom agents.",
|
||||||
help='Run a customized version of a pdb console to debug a simulation.')
|
)
|
||||||
parser.add_argument('--graph', '-g', action='store_true',
|
parser.add_argument(
|
||||||
help='Dump each trial\'s network topology as a GEXF graph. Defaults to false.')
|
"--dry-run",
|
||||||
parser.add_argument('--csv', action='store_true',
|
"--dry",
|
||||||
help='Dump all data collected in CSV format. Defaults to false.')
|
action="store_true",
|
||||||
parser.add_argument('--level', type=str,
|
help="Do not store the results of the simulation to disk, show in terminal instead.",
|
||||||
help='Logging level')
|
)
|
||||||
parser.add_argument('--output', '-o', type=str, default="soil_output",
|
parser.add_argument(
|
||||||
help='folder to write results to. It defaults to the current directory.')
|
"--pdb", action="store_true", help="Use a pdb console in case of exception."
|
||||||
parser.add_argument('--synchronous', action='store_true',
|
)
|
||||||
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
|
parser.add_argument(
|
||||||
parser.add_argument('-e', '--exporter', action='append',
|
"--debug",
|
||||||
help='Export environment and/or simulations using this exporter')
|
action="store_true",
|
||||||
parser.add_argument('--only-convert', '--convert', action='store_true',
|
help="Run a customized version of a pdb console to debug a simulation.",
|
||||||
help='Do not run the simulation, only convert the configuration file(s) and output them.')
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--graph",
|
||||||
|
"-g",
|
||||||
|
action="store_true",
|
||||||
|
help="Dump each trial's network topology as a GEXF graph. Defaults to false.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--csv",
|
||||||
|
action="store_true",
|
||||||
|
help="Dump all data collected in CSV format. Defaults to false.",
|
||||||
|
)
|
||||||
|
parser.add_argument("--level", type=str, help="Logging level")
|
||||||
|
parser.add_argument(
|
||||||
|
"--output",
|
||||||
|
"-o",
|
||||||
|
type=str,
|
||||||
|
default=output or "soil_output",
|
||||||
|
help="folder to write results to. It defaults to the current directory.",
|
||||||
|
)
|
||||||
|
if parallel is None:
|
||||||
|
parser.add_argument(
|
||||||
|
"--synchronous",
|
||||||
|
action="store_true",
|
||||||
|
help="Run trials serially and synchronously instead of in parallel. Defaults to false.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"-e",
|
||||||
|
"--exporter",
|
||||||
|
action="append",
|
||||||
|
default=[],
|
||||||
|
help="Export environment and/or simulations using this exporter",
|
||||||
|
)
|
||||||
|
|
||||||
parser.add_argument("--set",
|
parser.add_argument(
|
||||||
|
"--only-convert",
|
||||||
|
"--convert",
|
||||||
|
action="store_true",
|
||||||
|
help="Do not run the simulation, only convert the configuration file(s) and output them.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--set",
|
||||||
metavar="KEY=VALUE",
|
metavar="KEY=VALUE",
|
||||||
action='append',
|
action="append",
|
||||||
help="Set a number of parameters that will be passed to the simulation."
|
help="Set a number of parameters that will be passed to the simulation."
|
||||||
"(do not put spaces before or after the = sign). "
|
"(do not put spaces before or after the = sign). "
|
||||||
"If a value contains spaces, you should define "
|
"If a value contains spaces, you should define "
|
||||||
"it with double quotes: "
|
"it with double quotes: "
|
||||||
'foo="this is a sentence". Note that '
|
'foo="this is a sentence". Note that '
|
||||||
"values are always treated as strings.")
|
"values are always treated as strings.",
|
||||||
|
)
|
||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
logger.setLevel(getattr(logging, (args.level or 'INFO').upper()))
|
logger.setLevel(getattr(logging, (args.level or "INFO").upper()))
|
||||||
|
|
||||||
if args.version:
|
if args.version:
|
||||||
return
|
return
|
||||||
|
|
||||||
|
if parallel is None:
|
||||||
|
parallel = not args.synchronous
|
||||||
|
|
||||||
|
exporters = exporters or [
|
||||||
|
"default",
|
||||||
|
]
|
||||||
|
for exp in args.exporter:
|
||||||
|
if exp not in exporters:
|
||||||
|
exporters.append(exp)
|
||||||
|
if args.csv:
|
||||||
|
exporters.append("csv")
|
||||||
|
if args.graph:
|
||||||
|
exporters.append("gexf")
|
||||||
|
|
||||||
if os.getcwd() not in sys.path:
|
if os.getcwd() not in sys.path:
|
||||||
sys.path.append(os.getcwd())
|
sys.path.append(os.getcwd())
|
||||||
if args.module:
|
if args.module:
|
||||||
importlib.import_module(args.module)
|
importlib.import_module(args.module)
|
||||||
|
if output is None:
|
||||||
|
output = args.output
|
||||||
|
|
||||||
logger.info('Loading config file: {}'.format(args.file))
|
debug = debug or args.debug
|
||||||
|
|
||||||
if args.pdb or args.debug:
|
if args.pdb or debug:
|
||||||
args.synchronous = True
|
args.synchronous = True
|
||||||
if args.debug:
|
os.environ["SOIL_POSTMORTEM"] = "true"
|
||||||
os.environ['SOIL_DEBUG'] = 'true'
|
|
||||||
|
|
||||||
|
res = []
|
||||||
try:
|
try:
|
||||||
exporters = list(args.exporter or ['default', ])
|
|
||||||
if args.csv:
|
|
||||||
exporters.append('csv')
|
|
||||||
if args.graph:
|
|
||||||
exporters.append('gexf')
|
|
||||||
exp_params = {}
|
exp_params = {}
|
||||||
if args.dry_run:
|
|
||||||
exp_params['copy_to'] = sys.stdout
|
|
||||||
|
|
||||||
|
if sim:
|
||||||
|
logger.info("Loading simulation instance")
|
||||||
|
sim.dry_run = args.dry_run
|
||||||
|
sim.exporters = exporters
|
||||||
|
sim.parallel = parallel
|
||||||
|
sim.outdir = output
|
||||||
|
sims = [sim, ]
|
||||||
|
else:
|
||||||
|
logger.info("Loading config file: {}".format(args.file))
|
||||||
if not os.path.exists(args.file):
|
if not os.path.exists(args.file):
|
||||||
logger.error('Please, input a valid file')
|
logger.error("Please, input a valid file")
|
||||||
return
|
return
|
||||||
for sim in simulation.iter_from_config(args.file):
|
|
||||||
|
sims = list(simulation.iter_from_config(
|
||||||
|
args.file,
|
||||||
|
dry_run=args.dry_run,
|
||||||
|
exporters=exporters,
|
||||||
|
parallel=parallel,
|
||||||
|
outdir=output,
|
||||||
|
exporter_params=exp_params,
|
||||||
|
**kwargs,
|
||||||
|
))
|
||||||
|
|
||||||
|
for sim in sims:
|
||||||
|
|
||||||
if args.set:
|
if args.set:
|
||||||
for s in args.set:
|
for s in args.set:
|
||||||
k, v = s.split('=', 1)[:2]
|
k, v = s.split("=", 1)[:2]
|
||||||
v = eval(v)
|
v = eval(v)
|
||||||
tail, *head = k.rsplit('.', 1)[::-1]
|
tail, *head = k.rsplit(".", 1)[::-1]
|
||||||
target = sim
|
target = sim
|
||||||
if head:
|
if head:
|
||||||
for part in head[0].split('.'):
|
for part in head[0].split("."):
|
||||||
try:
|
try:
|
||||||
target = getattr(target, part)
|
target = getattr(target, part)
|
||||||
except AttributeError:
|
except AttributeError:
|
||||||
@@ -120,27 +208,40 @@ def main(cfg='simulation.yml', **kwargs):
|
|||||||
if args.only_convert:
|
if args.only_convert:
|
||||||
print(sim.to_yaml())
|
print(sim.to_yaml())
|
||||||
continue
|
continue
|
||||||
|
if do_run:
|
||||||
sim.run_simulation(dry_run=args.dry_run,
|
res.append(sim.run())
|
||||||
exporters=exporters,
|
else:
|
||||||
parallel=(not args.synchronous),
|
print("not running")
|
||||||
outdir=args.output,
|
res.append(sim)
|
||||||
exporter_params=exp_params,
|
|
||||||
**kwargs)
|
|
||||||
|
|
||||||
except Exception as ex:
|
except Exception as ex:
|
||||||
if args.pdb:
|
if args.pdb:
|
||||||
from .debugging import post_mortem
|
from .debugging import post_mortem
|
||||||
|
|
||||||
print(traceback.format_exc())
|
print(traceback.format_exc())
|
||||||
post_mortem()
|
post_mortem()
|
||||||
else:
|
else:
|
||||||
raise
|
raise
|
||||||
|
if debug:
|
||||||
|
from .debugging import set_trace
|
||||||
|
|
||||||
def easy(cfg, debug=False):
|
os.environ["SOIL_DEBUG"] = "true"
|
||||||
sim = simulation.from_config(cfg)
|
set_trace()
|
||||||
if debug or os.environ.get('SOIL_DEBUG'):
|
return res
|
||||||
from .debugging import setup
|
|
||||||
setup(sys._getframe().f_back)
|
|
||||||
return sim
|
@contextmanager
|
||||||
if __name__ == '__main__':
|
def easy(cfg, pdb=False, debug=False, **kwargs):
|
||||||
main()
|
try:
|
||||||
|
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
|
||||||
|
except Exception as e:
|
||||||
|
if os.environ.get("SOIL_POSTMORTEM"):
|
||||||
|
from .debugging import post_mortem
|
||||||
|
|
||||||
|
print(traceback.format_exc())
|
||||||
|
post_mortem()
|
||||||
|
raise
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main(do_run=True)
|
||||||
|
|||||||
@@ -1,4 +1,9 @@
|
|||||||
from . import main
|
from . import main as init_main
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
main()
|
def main():
|
||||||
|
init_main(do_run=True)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
init_main(do_run=True)
|
||||||
|
|||||||
@@ -7,6 +7,7 @@ class BassModel(FSM):
|
|||||||
innovation_prob
|
innovation_prob
|
||||||
imitation_prob
|
imitation_prob
|
||||||
"""
|
"""
|
||||||
|
|
||||||
sentimentCorrelation = 0
|
sentimentCorrelation = 0
|
||||||
|
|
||||||
def step(self):
|
def step(self):
|
||||||
@@ -19,9 +20,9 @@ class BassModel(FSM):
|
|||||||
self.sentimentCorrelation = 1
|
self.sentimentCorrelation = 1
|
||||||
return self.aware
|
return self.aware
|
||||||
else:
|
else:
|
||||||
aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
|
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
|
||||||
num_neighbors_aware = len(aware_neighbors)
|
num_neighbors_aware = len(aware_neighbors)
|
||||||
if self.prob((self['imitation_prob']*num_neighbors_aware)):
|
if self.prob((self["imitation_prob"] * num_neighbors_aware)):
|
||||||
self.sentimentCorrelation = 1
|
self.sentimentCorrelation = 1
|
||||||
return self.aware
|
return self.aware
|
||||||
|
|
||||||
|
|||||||
@@ -20,26 +20,38 @@ class BigMarketModel(FSM):
|
|||||||
|
|
||||||
def __init__(self, *args, **kwargs):
|
def __init__(self, *args, **kwargs):
|
||||||
super().__init__(*args, **kwargs)
|
super().__init__(*args, **kwargs)
|
||||||
self.enterprises = self.env.environment_params['enterprises']
|
self.enterprises = self.env.environment_params["enterprises"]
|
||||||
self.type = ""
|
self.type = ""
|
||||||
|
|
||||||
if self.id < len(self.enterprises): # Enterprises
|
if self.id < len(self.enterprises): # Enterprises
|
||||||
self.set_state(self.enterprise.id)
|
self._set_state(self.enterprise.id)
|
||||||
self.type = "Enterprise"
|
self.type = "Enterprise"
|
||||||
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
|
self.tweet_probability = environment.environment_params[
|
||||||
|
"tweet_probability_enterprises"
|
||||||
|
][self.id]
|
||||||
else: # normal users
|
else: # normal users
|
||||||
self.type = "User"
|
self.type = "User"
|
||||||
self.set_state(self.user.id)
|
self._set_state(self.user.id)
|
||||||
self.tweet_probability = environment.environment_params['tweet_probability_users']
|
self.tweet_probability = environment.environment_params[
|
||||||
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
|
"tweet_probability_users"
|
||||||
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
|
]
|
||||||
self.sentiment_about = environment.environment_params['sentiment_about'] # List
|
self.tweet_relevant_probability = environment.environment_params[
|
||||||
|
"tweet_relevant_probability"
|
||||||
|
]
|
||||||
|
self.tweet_probability_about = environment.environment_params[
|
||||||
|
"tweet_probability_about"
|
||||||
|
] # List
|
||||||
|
self.sentiment_about = environment.environment_params[
|
||||||
|
"sentiment_about"
|
||||||
|
] # List
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def enterprise(self):
|
def enterprise(self):
|
||||||
|
|
||||||
if self.random.random() < self.tweet_probability: # Tweets
|
if self.random.random() < self.tweet_probability: # Tweets
|
||||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
|
aware_neighbors = self.get_neighbors(
|
||||||
|
state_id=self.number_of_enterprises
|
||||||
|
) # Nodes neighbour users
|
||||||
for x in aware_neighbors:
|
for x in aware_neighbors:
|
||||||
if self.random.uniform(0, 10) < 5:
|
if self.random.uniform(0, 10) < 5:
|
||||||
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
|
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
|
||||||
@@ -52,12 +64,16 @@ class BigMarketModel(FSM):
|
|||||||
if x.sentiment_about[self.id] < -1:
|
if x.sentiment_about[self.id] < -1:
|
||||||
x.sentiment_about[self.id] = -1
|
x.sentiment_about[self.id] = -1
|
||||||
|
|
||||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
|
x.attrs[
|
||||||
|
"sentiment_enterprise_%s" % self.enterprises[self.id]
|
||||||
|
] = x.sentiment_about[self.id]
|
||||||
|
|
||||||
@state
|
@state
|
||||||
def user(self):
|
def user(self):
|
||||||
if self.random.random() < self.tweet_probability: # Tweets
|
if self.random.random() < self.tweet_probability: # Tweets
|
||||||
if self.random.random() < self.tweet_relevant_probability: # Tweets something relevant
|
if (
|
||||||
|
self.random.random() < self.tweet_relevant_probability
|
||||||
|
): # Tweets something relevant
|
||||||
# Tweet probability per enterprise
|
# Tweet probability per enterprise
|
||||||
for i in range(len(self.enterprises)):
|
for i in range(len(self.enterprises)):
|
||||||
random_num = self.random.random()
|
random_num = self.random.random()
|
||||||
@@ -72,11 +88,17 @@ class BigMarketModel(FSM):
|
|||||||
else:
|
else:
|
||||||
# POSITIVO
|
# POSITIVO
|
||||||
self.userTweets("positive", i)
|
self.userTweets("positive", i)
|
||||||
for i in range(len(self.enterprises)): # So that it never is set to 0 if there are not changes (logs)
|
for i in range(
|
||||||
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
|
len(self.enterprises)
|
||||||
|
): # So that it never is set to 0 if there are not changes (logs)
|
||||||
|
self.attrs[
|
||||||
|
"sentiment_enterprise_%s" % self.enterprises[i]
|
||||||
|
] = self.sentiment_about[i]
|
||||||
|
|
||||||
def userTweets(self, sentiment, enterprise):
|
def userTweets(self, sentiment, enterprise):
|
||||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
|
aware_neighbors = self.get_neighbors(
|
||||||
|
state_id=self.number_of_enterprises
|
||||||
|
) # Nodes neighbours users
|
||||||
for x in aware_neighbors:
|
for x in aware_neighbors:
|
||||||
if sentiment == "positive":
|
if sentiment == "positive":
|
||||||
x.sentiment_about[enterprise] += 0.003
|
x.sentiment_about[enterprise] += 0.003
|
||||||
@@ -91,4 +113,6 @@ class BigMarketModel(FSM):
|
|||||||
if x.sentiment_about[enterprise] < -1:
|
if x.sentiment_about[enterprise] < -1:
|
||||||
x.sentiment_about[enterprise] = -1
|
x.sentiment_about[enterprise] = -1
|
||||||
|
|
||||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]
|
x.attrs[
|
||||||
|
"sentiment_enterprise_%s" % self.enterprises[enterprise]
|
||||||
|
] = x.sentiment_about[enterprise]
|
||||||
|
|||||||
@@ -14,10 +14,10 @@ class CounterModel(NetworkAgent):
|
|||||||
def step(self):
|
def step(self):
|
||||||
# Outside effects
|
# Outside effects
|
||||||
total = len(list(self.model.schedule._agents))
|
total = len(list(self.model.schedule._agents))
|
||||||
neighbors = len(list(self.get_neighboring_agents()))
|
neighbors = len(list(self.get_neighbors()))
|
||||||
self['times'] = self.get('times', 0) + 1
|
self["times"] = self.get("times", 0) + 1
|
||||||
self['neighbors'] = neighbors
|
self["neighbors"] = neighbors
|
||||||
self['total'] = total
|
self["total"] = total
|
||||||
|
|
||||||
|
|
||||||
class AggregatedCounter(NetworkAgent):
|
class AggregatedCounter(NetworkAgent):
|
||||||
@@ -32,9 +32,9 @@ class AggregatedCounter(NetworkAgent):
|
|||||||
|
|
||||||
def step(self):
|
def step(self):
|
||||||
# Outside effects
|
# Outside effects
|
||||||
self['times'] += 1
|
self["times"] += 1
|
||||||
neighbors = len(list(self.get_neighboring_agents()))
|
neighbors = len(list(self.get_neighbors()))
|
||||||
self['neighbors'] += neighbors
|
self["neighbors"] += neighbors
|
||||||
total = len(list(self.model.schedule.agents))
|
total = len(list(self.model.schedule.agents))
|
||||||
self['total'] += total
|
self["total"] += total
|
||||||
self.debug('Running for step: {}. Total: {}'.format(self.now, total))
|
self.debug("Running for step: {}. Total: {}".format(self.now, total))
|
||||||
|
|||||||
@@ -2,20 +2,20 @@ from scipy.spatial import cKDTree as KDTree
 import networkx as nx
 from . import NetworkAgent, as_node


 class Geo(NetworkAgent):
-    '''In this type of network, nodes have a "pos" attribute.'''
+    """In this type of network, nodes have a "pos" attribute."""

     def geo_search(self, radius, node=None, center=False, **kwargs):
-        '''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
+        """Get a list of nodes whose coordinates are closer than *radius* to *node*."""
         node = as_node(node if node is not None else self)

         G = self.subgraph(**kwargs)

-        pos = nx.get_node_attributes(G, 'pos')
+        pos = nx.get_node_attributes(G, "pos")
         if not pos:
             return []
         nodes, coords = list(zip(*pos.items()))
         kdtree = KDTree(coords)  # Cannot provide generator.
         indices = kdtree.query_ball_point(pos[node], radius)
         return [nodes[i] for i in indices if center or (nodes[i] != node)]

@@ -11,10 +11,10 @@ class IndependentCascadeModel(BaseAgent):

     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
-        self.innovation_prob = self.env.environment_params['innovation_prob']
+        self.innovation_prob = self.env.environment_params["innovation_prob"]
-        self.imitation_prob = self.env.environment_params['imitation_prob']
+        self.imitation_prob = self.env.environment_params["imitation_prob"]
-        self.state['time_awareness'] = 0
+        self.state["time_awareness"] = 0
-        self.state['sentimentCorrelation'] = 0
+        self.state["sentimentCorrelation"] = 0

     def step(self):
         self.behaviour()
@@ -23,25 +23,27 @@ class IndependentCascadeModel(BaseAgent):
         aware_neighbors_1_time_step = []
         # Outside effects
         if self.prob(self.innovation_prob):
-            if self.state['id'] == 0:
+            if self.state["id"] == 0:
-                self.state['id'] = 1
+                self.state["id"] = 1
-                self.state['sentimentCorrelation'] = 1
+                self.state["sentimentCorrelation"] = 1
-                self.state['time_awareness'] = self.env.now  # To know when they have been infected
+                self.state[
+                    "time_awareness"
+                ] = self.env.now  # To know when they have been infected
             else:
                 pass

             return

         # Imitation effects
-        if self.state['id'] == 0:
+        if self.state["id"] == 0:
-            aware_neighbors = self.get_neighboring_agents(state_id=1)
+            aware_neighbors = self.get_neighbors(state_id=1)
             for x in aware_neighbors:
-                if x.state['time_awareness'] == (self.env.now-1):
+                if x.state["time_awareness"] == (self.env.now - 1):
                     aware_neighbors_1_time_step.append(x)
             num_neighbors_aware = len(aware_neighbors_1_time_step)
             if self.prob(self.imitation_prob * num_neighbors_aware):
-                self.state['id'] = 1
+                self.state["id"] = 1
-                self.state['sentimentCorrelation'] = 1
+                self.state["sentimentCorrelation"] = 1
             else:
                 pass

@@ -23,87 +23,100 @@ class SpreadModelM2(BaseAgent):
     def __init__(self, model=None, unique_id=0, state=()):
         super().__init__(model=environment, unique_id=unique_id, state=state)


        # Use a single generator with the same seed as `self.random`
        random = np.random.default_rng(seed=self._seed)
-        self.prob_neutral_making_denier = random.normal(environment.environment_params['prob_neutral_making_denier'],
-                                                        environment.environment_params['standard_variance'])
+        self.prob_neutral_making_denier = random.normal(
+            environment.environment_params["prob_neutral_making_denier"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_infect = random.normal(environment.environment_params['prob_infect'],
-                                         environment.environment_params['standard_variance'])
+        self.prob_infect = random.normal(
+            environment.environment_params["prob_infect"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_cured_healing_infected = random.normal(environment.environment_params['prob_cured_healing_infected'],
-                                                          environment.environment_params['standard_variance'])
-        self.prob_cured_vaccinate_neutral = random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
-                                                           environment.environment_params['standard_variance'])
+        self.prob_cured_healing_infected = random.normal(
+            environment.environment_params["prob_cured_healing_infected"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_cured_vaccinate_neutral = random.normal(
+            environment.environment_params["prob_cured_vaccinate_neutral"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_vaccinated_healing_infected = random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
-                                                               environment.environment_params['standard_variance'])
-        self.prob_vaccinated_vaccinate_neutral = random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
-                                                                environment.environment_params['standard_variance'])
-        self.prob_generate_anti_rumor = random.normal(environment.environment_params['prob_generate_anti_rumor'],
-                                                      environment.environment_params['standard_variance'])
+        self.prob_vaccinated_healing_infected = random.normal(
+            environment.environment_params["prob_vaccinated_healing_infected"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_vaccinated_vaccinate_neutral = random.normal(
+            environment.environment_params["prob_vaccinated_vaccinate_neutral"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_generate_anti_rumor = random.normal(
+            environment.environment_params["prob_generate_anti_rumor"],
+            environment.environment_params["standard_variance"],
+        )

     def step(self):

-        if self.state['id'] == 0:  # Neutral
+        if self.state["id"] == 0:  # Neutral
             self.neutral_behaviour()
-        elif self.state['id'] == 1:  # Infected
+        elif self.state["id"] == 1:  # Infected
             self.infected_behaviour()
-        elif self.state['id'] == 2:  # Cured
+        elif self.state["id"] == 2:  # Cured
             self.cured_behaviour()
-        elif self.state['id'] == 3:  # Vaccinated
+        elif self.state["id"] == 3:  # Vaccinated
             self.vaccinated_behaviour()

     def neutral_behaviour(self):

         # Infected
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         if len(infected_neighbors) > 0:
             if self.prob(self.prob_neutral_making_denier):
-                self.state['id'] = 3  # Vaccinated making denier
+                self.state["id"] = 3  # Vaccinated making denier

     def infected_behaviour(self):

         # Neutral
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_infect):
-                neighbor.state['id'] = 1  # Infected
+                neighbor.state["id"] = 1  # Infected

     def cured_behaviour(self):

         # Vaccinate
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_cured_vaccinate_neutral):
-                neighbor.state['id'] = 3  # Vaccinated
+                neighbor.state["id"] = 3  # Vaccinated

         # Cure
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors:
             if self.prob(self.prob_cured_healing_infected):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured

     def vaccinated_behaviour(self):

         # Cure
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors:
             if self.prob(self.prob_cured_healing_infected):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured

         # Vaccinate
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_cured_vaccinate_neutral):
-                neighbor.state['id'] = 3  # Vaccinated
+                neighbor.state["id"] = 3  # Vaccinated

         # Generate anti-rumor
-        infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
+        infected_neighbors_2 = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors_2:
             if self.prob(self.prob_generate_anti_rumor):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured


 class ControlModelM2(BaseAgent):
@@ -124,121 +137,134 @@ class ControlModelM2(BaseAgent):
         prob_generate_anti_rumor
     """


     def __init__(self, model=None, unique_id=0, state=()):
         super().__init__(model=environment, unique_id=unique_id, state=state)

-        self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
-                                                            environment.environment_params['standard_variance'])
+        self.prob_neutral_making_denier = np.random.normal(
+            environment.environment_params["prob_neutral_making_denier"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
-                                            environment.environment_params['standard_variance'])
+        self.prob_infect = np.random.normal(
+            environment.environment_params["prob_infect"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
-                                                             environment.environment_params['standard_variance'])
-        self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
-                                                              environment.environment_params['standard_variance'])
+        self.prob_cured_healing_infected = np.random.normal(
+            environment.environment_params["prob_cured_healing_infected"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_cured_vaccinate_neutral = np.random.normal(
+            environment.environment_params["prob_cured_vaccinate_neutral"],
+            environment.environment_params["standard_variance"],
+        )

-        self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
-                                                                  environment.environment_params['standard_variance'])
-        self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
-                                                                   environment.environment_params['standard_variance'])
-        self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
-                                                          environment.environment_params['standard_variance'])
+        self.prob_vaccinated_healing_infected = np.random.normal(
+            environment.environment_params["prob_vaccinated_healing_infected"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_vaccinated_vaccinate_neutral = np.random.normal(
+            environment.environment_params["prob_vaccinated_vaccinate_neutral"],
+            environment.environment_params["standard_variance"],
+        )
+        self.prob_generate_anti_rumor = np.random.normal(
+            environment.environment_params["prob_generate_anti_rumor"],
+            environment.environment_params["standard_variance"],
+        )

     def step(self):

-        if self.state['id'] == 0:  # Neutral
+        if self.state["id"] == 0:  # Neutral
             self.neutral_behaviour()
-        elif self.state['id'] == 1:  # Infected
+        elif self.state["id"] == 1:  # Infected
             self.infected_behaviour()
-        elif self.state['id'] == 2:  # Cured
+        elif self.state["id"] == 2:  # Cured
             self.cured_behaviour()
-        elif self.state['id'] == 3:  # Vaccinated
+        elif self.state["id"] == 3:  # Vaccinated
             self.vaccinated_behaviour()
-        elif self.state['id'] == 4:  # Beacon-off
+        elif self.state["id"] == 4:  # Beacon-off
             self.beacon_off_behaviour()
-        elif self.state['id'] == 5:  # Beacon-on
+        elif self.state["id"] == 5:  # Beacon-on
             self.beacon_on_behaviour()

     def neutral_behaviour(self):
-        self.state['visible'] = False
+        self.state["visible"] = False

         # Infected
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         if len(infected_neighbors) > 0:
             if self.random(self.prob_neutral_making_denier):
-                self.state['id'] = 3  # Vaccinated making denier
+                self.state["id"] = 3  # Vaccinated making denier

     def infected_behaviour(self):

         # Neutral
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_infect):
-                neighbor.state['id'] = 1  # Infected
+                neighbor.state["id"] = 1  # Infected
-        self.state['visible'] = False
+        self.state["visible"] = False

     def cured_behaviour(self):

-        self.state['visible'] = True
+        self.state["visible"] = True
         # Vaccinate
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_cured_vaccinate_neutral):
-                neighbor.state['id'] = 3  # Vaccinated
+                neighbor.state["id"] = 3  # Vaccinated

         # Cure
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors:
             if self.prob(self.prob_cured_healing_infected):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured

     def vaccinated_behaviour(self):
-        self.state['visible'] = True
+        self.state["visible"] = True

         # Cure
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors:
             if self.prob(self.prob_cured_healing_infected):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured

         # Vaccinate
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_cured_vaccinate_neutral):
-                neighbor.state['id'] = 3  # Vaccinated
+                neighbor.state["id"] = 3  # Vaccinated

         # Generate anti-rumor
-        infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
+        infected_neighbors_2 = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors_2:
             if self.prob(self.prob_generate_anti_rumor):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured

     def beacon_off_behaviour(self):
-        self.state['visible'] = False
+        self.state["visible"] = False
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         if len(infected_neighbors) > 0:
-            self.state['id'] == 5  # Beacon on
+            self.state["id"] == 5  # Beacon on

     def beacon_on_behaviour(self):
-        self.state['visible'] = False
+        self.state["visible"] = False
         # Cure (M2 feature added)
-        infected_neighbors = self.get_neighboring_agents(state_id=1)
+        infected_neighbors = self.get_neighbors(state_id=1)
         for neighbor in infected_neighbors:
             if self.prob(self.prob_generate_anti_rumor):
-                neighbor.state['id'] = 2  # Cured
+                neighbor.state["id"] = 2  # Cured
-            neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
+            neutral_neighbors_infected = neighbor.get_neighbors(state_id=0)
             for neighbor in neutral_neighbors_infected:
                 if self.prob(self.prob_generate_anti_rumor):
-                    neighbor.state['id'] = 3  # Vaccinated
+                    neighbor.state["id"] = 3  # Vaccinated
-            infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
+            infected_neighbors_infected = neighbor.get_neighbors(state_id=1)
             for neighbor in infected_neighbors_infected:
                 if self.prob(self.prob_generate_anti_rumor):
-                    neighbor.state['id'] = 2  # Cured
+                    neighbor.state["id"] = 2  # Cured

         # Vaccinate
-        neutral_neighbors = self.get_neighboring_agents(state_id=0)
+        neutral_neighbors = self.get_neighbors(state_id=0)
         for neighbor in neutral_neighbors:
             if self.prob(self.prob_cured_vaccinate_neutral):
-                neighbor.state['id'] = 3  # Vaccinated
+                neighbor.state["id"] = 3  # Vaccinated
@@ -33,24 +33,32 @@ class SISaModel(FSM):

         random = np.random.default_rng(seed=self._seed)

-        self.neutral_discontent_spon_prob = random.normal(self.env['neutral_discontent_spon_prob'],
-                                                          self.env['standard_variance'])
-        self.neutral_discontent_infected_prob = random.normal(self.env['neutral_discontent_infected_prob'],
-                                                              self.env['standard_variance'])
-        self.neutral_content_spon_prob = random.normal(self.env['neutral_content_spon_prob'],
-                                                       self.env['standard_variance'])
-        self.neutral_content_infected_prob = random.normal(self.env['neutral_content_infected_prob'],
-                                                           self.env['standard_variance'])
+        self.neutral_discontent_spon_prob = random.normal(
+            self.env["neutral_discontent_spon_prob"], self.env["standard_variance"]
+        )
+        self.neutral_discontent_infected_prob = random.normal(
+            self.env["neutral_discontent_infected_prob"], self.env["standard_variance"]
+        )
+        self.neutral_content_spon_prob = random.normal(
+            self.env["neutral_content_spon_prob"], self.env["standard_variance"]
+        )
+        self.neutral_content_infected_prob = random.normal(
+            self.env["neutral_content_infected_prob"], self.env["standard_variance"]
+        )

-        self.discontent_neutral = random.normal(self.env['discontent_neutral'],
-                                                self.env['standard_variance'])
-        self.discontent_content = random.normal(self.env['discontent_content'],
-                                                self.env['variance_d_c'])
+        self.discontent_neutral = random.normal(
+            self.env["discontent_neutral"], self.env["standard_variance"]
+        )
+        self.discontent_content = random.normal(
+            self.env["discontent_content"], self.env["variance_d_c"]
+        )

-        self.content_discontent = random.normal(self.env['content_discontent'],
-                                                self.env['variance_c_d'])
-        self.content_neutral = random.normal(self.env['content_neutral'],
-                                             self.env['standard_variance'])
+        self.content_discontent = random.normal(
+            self.env["content_discontent"], self.env["variance_c_d"]
+        )
+        self.content_neutral = random.normal(
+            self.env["content_neutral"], self.env["standard_variance"]
+        )

     @state
     def neutral(self):
@@ -61,10 +69,10 @@ class SISaModel(FSM):
             return self.content

         # Infected
-        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent)
+        discontent_neighbors = self.count_neighbors(state_id=self.discontent)
         if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
             return self.discontent
-        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
+        content_neighbors = self.count_neighbors(state_id=self.content.id)
         if self.prob(content_neighbors * self.neutral_content_infected_prob):
             return self.content
         return self.neutral
@@ -76,7 +84,7 @@ class SISaModel(FSM):
             return self.neutral

         # Superinfected
-        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
+        content_neighbors = self.count_neighbors(state_id=self.content.id)
         if self.prob(content_neighbors * self.discontent_content):
             return self.content
         return self.discontent
@@ -88,7 +96,7 @@ class SISaModel(FSM):
             return self.neutral

         # Superinfected
-        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
+        discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
         if self.prob(discontent_neighbors * self.content_discontent):
             self.discontent
         return self.content
@@ -17,15 +17,19 @@ class SentimentCorrelationModel(BaseAgent):

     def __init__(self, environment, unique_id=0, state=()):
         super().__init__(model=environment, unique_id=unique_id, state=state)
-        self.outside_effects_prob = environment.environment_params['outside_effects_prob']
-        self.anger_prob = environment.environment_params['anger_prob']
-        self.joy_prob = environment.environment_params['joy_prob']
-        self.sadness_prob = environment.environment_params['sadness_prob']
-        self.disgust_prob = environment.environment_params['disgust_prob']
-        self.state['time_awareness'] = []
+        self.outside_effects_prob = environment.environment_params[
+            "outside_effects_prob"
+        ]
+        self.anger_prob = environment.environment_params["anger_prob"]
+        self.joy_prob = environment.environment_params["joy_prob"]
+        self.sadness_prob = environment.environment_params["sadness_prob"]
+        self.disgust_prob = environment.environment_params["disgust_prob"]
+        self.state["time_awareness"] = []
         for i in range(4):  # In this model we have 4 sentiments
-            self.state['time_awareness'].append(0)  # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
-        self.state['sentimentCorrelation'] = 0
+            self.state["time_awareness"].append(
+                0
+            )  # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
+        self.state["sentimentCorrelation"] = 0

     def step(self):
         self.behaviour()
@@ -37,65 +41,75 @@ class SentimentCorrelationModel(BaseAgent):
         sad_neighbors_1_time_step = []
         disgusted_neighbors_1_time_step = []

-        angry_neighbors = self.get_neighboring_agents(state_id=1)
+        angry_neighbors = self.get_neighbors(state_id=1)
         for x in angry_neighbors:
-            if x.state['time_awareness'][0] > (self.env.now-500):
+            if x.state["time_awareness"][0] > (self.env.now - 500):
                 angry_neighbors_1_time_step.append(x)
         num_neighbors_angry = len(angry_neighbors_1_time_step)

-        joyful_neighbors = self.get_neighboring_agents(state_id=2)
+        joyful_neighbors = self.get_neighbors(state_id=2)
         for x in joyful_neighbors:
-            if x.state['time_awareness'][1] > (self.env.now-500):
+            if x.state["time_awareness"][1] > (self.env.now - 500):
                 joyful_neighbors_1_time_step.append(x)
         num_neighbors_joyful = len(joyful_neighbors_1_time_step)

-        sad_neighbors = self.get_neighboring_agents(state_id=3)
+        sad_neighbors = self.get_neighbors(state_id=3)
         for x in sad_neighbors:
-            if x.state['time_awareness'][2] > (self.env.now-500):
+            if x.state["time_awareness"][2] > (self.env.now - 500):
                 sad_neighbors_1_time_step.append(x)
         num_neighbors_sad = len(sad_neighbors_1_time_step)

-        disgusted_neighbors = self.get_neighboring_agents(state_id=4)
+        disgusted_neighbors = self.get_neighbors(state_id=4)
         for x in disgusted_neighbors:
-            if x.state['time_awareness'][3] > (self.env.now-500):
+            if x.state["time_awareness"][3] > (self.env.now - 500):
                 disgusted_neighbors_1_time_step.append(x)
         num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)

-        anger_prob = self.anger_prob+(len(angry_neighbors_1_time_step)*self.anger_prob)
+        anger_prob = self.anger_prob + (
+            len(angry_neighbors_1_time_step) * self.anger_prob
+        )
         joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
-        sadness_prob = self.sadness_prob+(len(sad_neighbors_1_time_step)*self.sadness_prob)
-        disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
+        sadness_prob = self.sadness_prob + (
+            len(sad_neighbors_1_time_step) * self.sadness_prob
+        )
+        disgust_prob = self.disgust_prob + (
+            len(disgusted_neighbors_1_time_step) * self.disgust_prob
+        )
         outside_effects_prob = self.outside_effects_prob

         num = self.random.random()

         if num < outside_effects_prob:
-            self.state['id'] = self.random.randint(1, 4)
+            self.state["id"] = self.random.randint(1, 4)

-            self.state['sentimentCorrelation'] = self.state['id']  # It is stored when it has been infected for the dynamic network
-            self.state['time_awareness'][self.state['id']-1] = self.env.now
-            self.state['sentiment'] = self.state['id']
+            self.state["sentimentCorrelation"] = self.state[
+                "id"
+            ]  # It is stored when it has been infected for the dynamic network
+            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
+            self.state["sentiment"] = self.state["id"]

-        if(num<anger_prob):
-
-            self.state['id'] = 1
-            self.state['sentimentCorrelation'] = 1
-            self.state['time_awareness'][self.state['id']-1] = self.env.now
-        elif (num<joy_prob+anger_prob and num>anger_prob):
-
-            self.state['id'] = 2
-            self.state['sentimentCorrelation'] = 2
-            self.state['time_awareness'][self.state['id']-1] = self.env.now
-        elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
-
-            self.state['id'] = 3
-            self.state['sentimentCorrelation'] = 3
-            self.state['time_awareness'][self.state['id']-1] = self.env.now
-        elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
-
-            self.state['id'] = 4
-            self.state['sentimentCorrelation'] = 4
-            self.state['time_awareness'][self.state['id']-1] = self.env.now
-
-        self.state['sentiment'] = self.state['id']
+        if num < anger_prob:
+
+            self.state["id"] = 1
+            self.state["sentimentCorrelation"] = 1
+            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
+        elif num < joy_prob + anger_prob and num > anger_prob:
+
+            self.state["id"] = 2
+            self.state["sentimentCorrelation"] = 2
+            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
+        elif num < sadness_prob + anger_prob + joy_prob and num > joy_prob + anger_prob:
+
+            self.state["id"] = 3
+            self.state["sentimentCorrelation"] = 3
+            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
+        elif (
+            num < disgust_prob + sadness_prob + anger_prob + joy_prob
+            and num > sadness_prob + anger_prob + joy_prob
+        ):
+
+            self.state["id"] = 4
+            self.state["sentimentCorrelation"] = 4
+            self.state["time_awareness"][self.state["id"] - 1] = self.env.now
+
+        self.state["sentiment"] = self.state["id"]
@@ -20,17 +20,13 @@ from typing import Dict, List
 from .. import serialization, utils, time, config


-IGNORED_FIELDS = ('model', 'logger')
-
-
-class DeadAgent(Exception):
-    pass
+IGNORED_FIELDS = ("model", "logger")


 def as_node(agent):
     if isinstance(agent, BaseAgent):
         return agent.id
     return agent


 class MetaAgent(ABCMeta):
@@ -43,13 +39,44 @@ class MetaAgent(ABCMeta):
             defaults.update(i._defaults)

         new_nmspc = {
-            '_defaults': defaults,
+            "_defaults": defaults,
+            "_last_return": None,
+            "_last_except": None,
         }

         for attr, func in namespace.items():
-            if isinstance(func, types.FunctionType) or isinstance(func, property) or attr[0] == '_':
+            if attr == "step" and inspect.isgeneratorfunction(func):
+                orig_func = func
+                new_nmspc["_coroutine"] = None
+
+                @wraps(func)
+                def func(self):
+                    while True:
+                        if not self._coroutine:
+                            self._coroutine = orig_func(self)
+                        try:
+                            if self._last_except:
+                                return self._coroutine.throw(self._last_except)
+                            else:
+                                return self._coroutine.send(self._last_return)
+                        except StopIteration as ex:
+                            self._coroutine = None
+                            return ex.value
+                        finally:
+                            self._last_return = None
+                            self._last_except = None
+
+                func.id = name or func.__name__
+                func.is_default = False
                 new_nmspc[attr] = func
-            elif attr == 'defaults':
+            elif (
+                isinstance(func, types.FunctionType)
+                or isinstance(func, property)
+                or isinstance(func, classmethod)
+                or attr[0] == "_"
+            ):
+                new_nmspc[attr] = func
+            elif attr == "defaults":
                 defaults.update(func)
             else:
                 defaults[attr] = copy(func)
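The `MetaAgent` hunk above wraps a generator-based `step` into a coroutine that is created on first call and resumed on every subsequent call, so an agent can keep local state across scheduler ticks. A minimal sketch of what that enables, assuming the 0.30 API shown in this diff (`BaseAgent`, `time.Delta`, `self.info`); the agent class and the yielded delays are purely illustrative:

```python
from soil import agents, time


class Ping(agents.BaseAgent):
    def step(self):
        # This generator is only instantiated once; the wrapped step()
        # resumes it here on every call and returns whatever is yielded,
        # which the scheduler treats as the delay until the next step.
        while self.alive:
            self.info("ping")
            yield time.Delta(1)
            self.info("pong")
            yield time.Delta(2)
```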
@@ -69,12 +96,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
     Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
     """

-    def __init__(self,
-                 unique_id,
-                 model,
-                 name=None,
-                 interval=None,
-                 **kwargs):
+    def __init__(self, unique_id, model, name=None, interval=None, **kwargs):
         # Check for REQUIRED arguments
         # Initialize agent parameters
         if isinstance(unique_id, MesaAgent):
@@ -82,16 +104,19 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
         assert isinstance(unique_id, int)
         super().__init__(unique_id=unique_id, model=model)

-        self.name = str(name) if name else'{}[{}]'.format(type(self).__name__, self.unique_id)
+        self.name = (
+            str(name) if name else "{}[{}]".format(type(self).__name__, self.unique_id)
+        )

         self.alive = True

-        self.interval = interval or self.get('interval', 1)
+        self.interval = interval or self.get("interval", 1)
-        logger = utils.logger.getChild(getattr(self.model, 'id', self.model)).getChild(self.name)
-        self.logger = logging.LoggerAdapter(logger, {'agent_name': self.name})
+        logger = utils.logger.getChild(getattr(self.model, "id", self.model)).getChild(
+            self.name
+        )
+        self.logger = logging.LoggerAdapter(logger, {"agent_name": self.name})

-        if hasattr(self, 'level'):
+        if hasattr(self, "level"):
             self.logger.setLevel(self.level)

         for (k, v) in self._defaults.items():
@@ -113,27 +138,26 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
     def id(self):
         return self.unique_id

-    @property
-    def state(self):
-        '''
-        Return the agent itself, which behaves as a dictionary.
-
-        This method shouldn't be used, but is kept here for backwards compatibility.
-        '''
-        return self
-
-    @state.setter
-    def state(self, value):
-        if not value:
-            return
-        for k, v in value.items():
-            self[k] = v
+    @classmethod
+    def from_dict(cls, model, attrs, warn_extra=True):
+        ignored = {}
+        args = {}
+        for k, v in attrs.items():
+            if k in inspect.signature(cls).parameters:
+                args[k] = v
+            else:
+                ignored[k] = v
+        if ignored and warn_extra:
+            utils.logger.info(
+                f"Ignoring the following arguments for agent class { agent_class.__name__ }: { ignored }"
+            )
+        return cls(model=model, **args)

     def __getitem__(self, key):
         try:
             return getattr(self, key)
         except AttributeError:
-            raise KeyError(f'key {key} not found in agent')
+            raise KeyError(f"key {key} not found in agent")

     def __delitem__(self, key):
         return delattr(self, key)
@@ -151,7 +175,7 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
         return self.items()

     def keys(self):
-        return (k for k in self.__dict__ if k[0] != '_' and k not in IGNORED_FIELDS)
+        return (k for k in self.__dict__ if k[0] != "_" and k not in IGNORED_FIELDS)

     def items(self, keys=None, skip=None):
         keys = keys if keys is not None else self.keys()
@@ -172,13 +196,17 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
         return None

     def die(self):
-        self.info(f'agent dying')
+        self.info(f"agent dying")
         self.alive = False
+        try:
+            self.model.schedule.remove(self)
+        except KeyError:
+            pass
         return time.NEVER

     def step(self):
         if not self.alive:
-            raise DeadAgent(self.unique_id)
+            raise time.DeadAgent(self.unique_id)
         return super().step() or time.Delta(self.interval)

     def log(self, message, *args, level=logging.INFO, **kwargs):
@@ -189,9 +217,9 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
         for k, v in kwargs:
             message += " {k}={v} ".format(k, v)
         extra = {}
-        extra['now'] = self.now
+        extra["now"] = self.now
-        extra['unique_id'] = self.unique_id
+        extra["unique_id"] = self.unique_id
-        extra['agent_name'] = self.name
+        extra["agent_name"] = self.name
         return self.logger.log(level, message, extra=extra)

     def debug(self, *args, **kwargs):
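The new `from_dict` classmethod filters a dictionary of attributes against the parameters accepted by the agent class before instantiating it. A hypothetical usage sketch (the `CounterModel` class appears elsewhere in this diff; `env` and the attribute values here are only illustrative):

```python
# Keys that match the constructor's parameters are passed through;
# the rest are collected and logged as ignored (when warn_extra is True).
agent = CounterModel.from_dict(
    model=env,
    attrs={"unique_id": 0, "interval": 2, "not_a_parameter": 42},
)
```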
@@ -217,198 +245,18 @@ class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
         content = dict(self.items(keys=keys))
         if pretty and content:
             d = content
-            content = '\n'
+            content = "\n"
             for k, v in d.items():
-                content += f'- {k}: {v}\n'
+                content += f"- {k}: {v}\n"
-            content = textwrap.indent(content, ' ')
+            content = textwrap.indent(content, " ")
         return f"{repr(self)}{content}"

     def __repr__(self):
         return f"{self.__class__.__name__}({self.unique_id})"


-class NetworkAgent(BaseAgent):
-
-    def __init__(self, *args, topology, node_id, **kwargs):
-        super().__init__(*args, **kwargs)
-
-        self.topology = topology
-        self.node_id = node_id
-        self.G = self.model.topologies[topology]
-        assert self.G
-
-    def count_neighboring_agents(self, state_id=None, **kwargs):
-        return len(self.get_neighboring_agents(state_id=state_id, **kwargs))
-
-    def get_neighboring_agents(self, state_id=None, **kwargs):
-        return self.get_agents(limit_neighbors=True, state_id=state_id, **kwargs)
-
-    def iter_agents(self, unique_id=None, limit_neighbors=False, **kwargs):
-        unique_ids = None
-        if isinstance(unique_id, list):
-            unique_ids = set(unique_id)
-        elif unique_id is not None:
-            unique_ids = set([unique_id,])
-
-        if limit_neighbors:
-            neighbor_ids = set()
-            for node_id in self.G.neighbors(self.node_id):
-                if self.G.nodes[node_id].get('agent_id') is not None:
-                    neighbor_ids.add(node_id)
-            if unique_ids:
-                unique_ids = unique_ids & neighbor_ids
-            else:
-                unique_ids = neighbor_ids
-            if not unique_ids:
-                return
-            unique_ids = list(unique_ids)
-        yield from super().iter_agents(unique_id=unique_ids, **kwargs)
-
-    def subgraph(self, center=True, **kwargs):
-        include = [self] if center else []
-        G = self.G.subgraph(n.node_id for n in list(self.get_agents(**kwargs)+include))
-        return G
-
-    def remove_node(self):
-        self.G.remove_node(self.node_id)
-
-    def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
-        if self.node_id not in self.G.nodes(data=False):
-            raise ValueError('{} not in list of existing agents in the network'.format(self.unique_id))
-        if other.node_id not in self.G.nodes(data=False):
-            raise ValueError('{} not in list of existing agents in the network'.format(other))
-
-        self.G.add_edge(self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs)
-
-    def die(self, remove=True):
-        if remove:
-            self.remove_node()
-        return super().die()
-
-
-def state(name=None):
-    def decorator(func, name=None):
-        '''
-        A state function should return either a state id, or a tuple (state_id, when)
-        The default value for state_id is the current state id.
-        The default value for when is the interval defined in the environment.
-        '''
-        if inspect.isgeneratorfunction(func):
-            orig_func = func
-
-            @wraps(func)
-            def func(self):
-                while True:
-                    if not self._coroutine:
-                        self._coroutine = orig_func(self)
-                    try:
-                        n = next(self._coroutine)
-                        if n:
-                            return None, n
-                        return
-                    except StopIteration as ex:
-                        self._coroutine = None
-                        next_state = ex.value
-                        if next_state is not None:
-                            self.set_state(next_state)
-                        return next_state
-
-        func.id = name or func.__name__
-        func.is_default = False
-        return func
-
-    if callable(name):
-        return decorator(name)
-    else:
-        return partial(decorator, name=name)
-
-
-def default_state(func):
-    func.is_default = True
-    return func
-
-
-class MetaFSM(MetaAgent):
-    def __new__(mcls, name, bases, namespace):
-        states = {}
-        # Re-use states from inherited classes
-        default_state = None
-        for i in bases:
-            if isinstance(i, MetaFSM):
-                for state_id, state in i._states.items():
-                    if state.is_default:
-                        default_state = state
-                    states[state_id] = state
-
-        # Add new states
-        for attr, func in namespace.items():
-            if hasattr(func, 'id'):
-                if func.is_default:
-                    default_state = func
-                states[func.id] = func
-
-        namespace.update({
-            '_default_state': default_state,
-            '_states': states,
-        })
-
-        return super(MetaFSM, mcls).__new__(mcls=mcls, name=name, bases=bases, namespace=namespace)
-
-
-class FSM(BaseAgent, metaclass=MetaFSM):
-    def __init__(self, *args, **kwargs):
-        super(FSM, self).__init__(*args, **kwargs)
-        if not hasattr(self, 'state_id'):
-            if not self._default_state:
-                raise ValueError('No default state specified for {}'.format(self.unique_id))
-            self.state_id = self._default_state.id
-
-        self._coroutine = None
-        self.set_state(self.state_id)
-
-    def step(self):
-        self.debug(f'Agent {self.unique_id} @ state {self.state_id}')
-        default_interval = super().step()
-
-        next_state = self._states[self.state_id](self)
-
-        when = None
-        try:
-            next_state, *when = next_state
-            if not when:
-                when = None
-            elif len(when) == 1:
-                when = when[0]
-            else:
-                raise ValueError('Too many values returned. Only state (and time) allowed')
-        except TypeError:
-            pass
-
-        if next_state is not None:
-            self.set_state(next_state)
-
-        return when or default_interval
-
-    def set_state(self, state, when=None):
-        if hasattr(state, 'id'):
-            state = state.id
-        if state not in self._states:
-            raise ValueError('{} is not a valid state'.format(state))
-        self.state_id = state
-        if when is not None:
-            self.model.schedule.add(self, when=when)
-        return state
-
-    def die(self):
-        return self.dead, super().die()
-
-    @state
-    def dead(self):
-        return self.die()
-
-
 def prob(prob, random):
-    '''
+    """
     A true/False uniform distribution with a given probability.
     To be used like this:

@@ -417,14 +265,13 @@ def prob(prob, random):
         if prob(0.3):
             do_something()

-    '''
+    """
     r = random.random()
     return r < prob


-def calculate_distribution(network_agents=None,
-                           agent_class=None):
-    '''
+def calculate_distribution(network_agents=None, agent_class=None):
+    """
     Calculate the threshold values (thresholds for a uniform distribution)
     of an agent distribution given the weights of each agent type.

@@ -447,168 +294,54 @@ def calculate_distribution(network_agents=None, agent_class=None):

     In this example, 20% of the nodes will be marked as type
     'agent_class_1'.
-    '''
+    """
     if network_agents:
-        network_agents = [deepcopy(agent) for agent in network_agents if not hasattr(agent, 'id')]
+        network_agents = [
+            deepcopy(agent) for agent in network_agents if not hasattr(agent, "id")
+        ]
     elif agent_class:
-        network_agents = [{'agent_class': agent_class}]
+        network_agents = [{"agent_class": agent_class}]
     else:
-        raise ValueError('Specify a distribution or a default agent type')
+        raise ValueError("Specify a distribution or a default agent type")

     # Fix missing weights and incompatible types
     for x in network_agents:
-        x['weight'] = float(x.get('weight', 1))
+        x["weight"] = float(x.get("weight", 1))

     # Calculate the thresholds
-    total = sum(x['weight'] for x in network_agents)
+    total = sum(x["weight"] for x in network_agents)
     acc = 0
     for v in network_agents:
-        if 'ids' in v:
+        if "ids" in v:
             continue
-        upper = acc + (v['weight']/total)
+        upper = acc + (v["weight"] / total)
-        v['threshold'] = [acc, upper]
+        v["threshold"] = [acc, upper]
         acc = upper
     return network_agents


-def serialize_type(agent_class, known_modules=[], **kwargs):
+def _serialize_type(agent_class, known_modules=[], **kwargs):
     if isinstance(agent_class, str):
         return agent_class
-    known_modules += ['soil.agents']
+    known_modules += ["soil.agents"]
-    return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[1]  # Get the name of the class
+    return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[
+        1
+    ]  # Get the name of the class


-def serialize_definition(network_agents, known_modules=[]):
-    '''
-    When serializing an agent distribution, remove the thresholds, in order
-    to avoid cluttering the YAML definition file.
-    '''
-    d = deepcopy(list(network_agents))
-    for v in d:
-        if 'threshold' in v:
-            del v['threshold']
-        v['agent_class'] = serialize_type(v['agent_class'],
-                                          known_modules=known_modules)
-    return d
-
-
-def deserialize_type(agent_class, known_modules=[]):
+def _deserialize_type(agent_class, known_modules=[]):
     if not isinstance(agent_class, str):
         return agent_class
-    known = known_modules + ['soil.agents', 'soil.agents.custom' ]
+    known = known_modules + ["soil.agents", "soil.agents.custom"]
     agent_class = serialization.deserializer(agent_class, known_modules=known)
     return agent_class


-def deserialize_definition(ind, **kwargs):
-    d = deepcopy(ind)
-    for v in d:
-        v['agent_class'] = deserialize_type(v['agent_class'], **kwargs)
-    return d
-
-
-def _validate_states(states, topology):
-    '''Validate states to avoid ignoring states during initialization'''
-    states = states or []
-    if isinstance(states, dict):
-        for x in states:
-            assert x in topology.nodes
-    else:
-        assert len(states) <= len(topology)
-    return states
-
-
-def _convert_agent_classs(ind, to_string=False, **kwargs):
-    '''Convenience method to allow specifying agents by class or class name.'''
-    if to_string:
-        return serialize_definition(ind, **kwargs)
-    return deserialize_definition(ind, **kwargs)
-
-
-# def _agent_from_definition(definition, random, value=-1, unique_id=None):
-#     """Used in the initialization of agents given an agent distribution."""
-#     if value < 0:
-#         value = random.random()
-#     for d in sorted(definition, key=lambda x: x.get('threshold')):
-#         threshold = d.get('threshold', (-1, -1))
-#         # Check if the definition matches by id (first) or by threshold
-#         if (unique_id is not None and unique_id in d.get('ids', [])) or \
-#            (value >= threshold[0] and value < threshold[1]):
-#             state = {}
-#             if 'state' in d:
-#                 state = deepcopy(d['state'])
-#             return d['agent_class'], state
-
-#     raise Exception('Definition for value {} not found in: {}'.format(value, definition))
-
-
-# def _definition_to_dict(definition, random, size=None, default_state=None):
-#     state = default_state or {}
-#     agents = {}
-#     remaining = {}
-#     if size:
-#         for ix in range(size):
-#             remaining[ix] = copy(state)
-#     else:
-#         remaining = defaultdict(lambda x: copy(state))
-
-#     distro = sorted([item for item in definition if 'weight' in item])
-
-#     id = 0
-
-#     def init_agent(item, id=ix):
-#         while id in agents:
-#             id += 1
-
-#         agent = remaining[id]
-#         agent['state'].update(copy(item.get('state', {})))
-#         agents[agent.unique_id] = agent
-#         del remaining[id]
-#         return agent
-
-#     for item in definition:
-#         if 'ids' in item:
-#             ids = item['ids']
-#             del item['ids']
-#             for id in ids:
-#                 agent = init_agent(item, id)
-
-#     for item in definition:
-#         if 'number' in item:
-#             times = item['number']
-#             del item['number']
-#             for times in range(times):
-#                 if size:
-#                     ix = random.choice(remaining.keys())
-#                     agent = init_agent(item, id)
-#                 else:
-#                     agent = init_agent(item)
-#     if not size:
-#         return agents
-
-#     if len(remaining) < 0:
-#         raise Exception('Invalid definition. Too many agents to add')
-
-
-#     total_weight = float(sum(s['weight'] for s in distro))
-#     unit = size / total_weight
-
-#     for item in distro:
-#         times = unit * item['weight']
-#         del item['weight']
-#         for times in range(times):
-#             ix = random.choice(remaining.keys())
-#             agent = init_agent(item, id)
-#     return agents
-
-
 class AgentView(Mapping, Set):
-    """A lazy-loaded list of agents.
-    """
+    """A lazy-loaded list of agents."""

     __slots__ = ("_agents",)


     def __init__(self, agents):
         self._agents = agents

@@ -651,11 +384,20 @@ class AgentView(Mapping, Set):
         return f"{self.__class__.__name__}({self})"


-def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=None, ignore=None, state=None,
-                  limit=None, **kwargs):
-    '''
+def filter_agents(
+    agents,
+    *id_args,
+    unique_id=None,
+    state_id=None,
+    agent_class=None,
+    ignore=None,
+    state=None,
+    limit=None,
+    **kwargs,
+):
+    """
     Filter agents given as a dict, by the criteria given as arguments (e.g., certain type or state id).
-    '''
+    """
     assert isinstance(agents, dict)

     ids = []
@@ -678,7 +420,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
         state_id = tuple([state_id])

     if agent_class is not None:
-        agent_class = deserialize_type(agent_class)
+        agent_class = _deserialize_type(agent_class)
         try:
             agent_class = tuple(agent_class)
         except TypeError:
@@ -688,7 +430,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
         f = filter(lambda x: x not in ignore, f)

     if state_id is not None:
-        f = filter(lambda agent: agent.get('state_id', None) in state_id, f)
+        f = filter(lambda agent: agent.get("state_id", None) in state_id, f)

     if agent_class is not None:
         f = filter(lambda agent: isinstance(agent, agent_class), f)
@@ -697,7 +439,7 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
         state.update(kwargs)

     for k, v in state.items():
-        f = filter(lambda agent: agent.state.get(k, None) == v, f)
+        f = filter(lambda agent: getattr(agent, k, None) == v, f)

     if limit is not None:
         f = islice(f, limit)
@@ -705,123 +447,135 @@ def filter_agents(agents, *id_args, unique_id=None, state_id=None, agent_class=N
|
|||||||
yield from f
|
yield from f
|
||||||
|
|
||||||
|
|
||||||
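A sketch of calling the helper above directly (it is usually reached through the environment's agents() accessor); the environment variable and the filter criteria are illustrative assumptions, not part of the diff.

# Sketch: filtering agents by class and state.
from soil import agents

def sample_infected(env, k=5):
    # filter_agents expects a dict keyed by unique_id and yields matches lazily.
    return list(
        agents.filter_agents(
            dict(env.schedule._agents),
            agent_class=agents.NetworkAgent,
            state_id="infected",
            limit=k,
        )
    )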
def from_config(
    cfg: config.AgentConfig, random, topology: nx.Graph = None
) -> List[Dict[str, Any]]:
    """
    This function turns an agentconfig into a list of individual "agent specifications", which are just a dictionary
    with the parameters that the environment will use to construct each agent.

    This function does NOT return a list of agents, mostly because some attributes to the agent are not known at the
    time of calling this function, such as `unique_id`.
    """
    default = cfg or config.AgentConfig()
    if not isinstance(cfg, config.AgentConfig):
        cfg = config.AgentConfig(**cfg)

    agents = []

    assigned_total = 0
    assigned_network = 0

    if cfg.fixed is not None:
        agents, assigned_total, assigned_network = _from_fixed(
            cfg.fixed, topology=cfg.topology, default=cfg
        )

    n = cfg.n

    if cfg.distribution:
        topo_size = len(topology) if topology else 0

        networked = []
        total = []

        for d in cfg.distribution:
            if d.strategy == config.Strategy.topology:
                topo = d.topology if ("topology" in d.__fields_set__) else cfg.topology
                if not topo:
                    raise ValueError(
                        'The "topology" strategy only works if the topology parameter is set to True'
                    )
                if not topo_size:
                    raise ValueError(
                        "Topology does not have enough free nodes to assign one to the agent"
                    )

                networked.append(d)

            if d.strategy == config.Strategy.total:
                if not cfg.n:
                    raise ValueError(
                        'Cannot use the "total" strategy without providing the total number of agents'
                    )
                total.append(d)

        if networked:
            new_agents = _from_distro(
                networked,
                n=topo_size - assigned_network,
                topology=topo,
                default=cfg,
                random=random,
            )
            assigned_total += len(new_agents)
            assigned_network += len(new_agents)
            agents += new_agents

        if total:
            remaining = n - assigned_total
            agents += _from_distro(total, n=remaining, default=cfg, random=random)

        if assigned_network < topo_size:
            utils.logger.warn(
                f"The total number of agents does not match the total number of nodes in "
                "every topology. This may be due to a definition error: assigned: "
                f"{ assigned_total } total size: { topo_size }"
            )

    return agents
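A sketch of an agent definition that exercises both paths above (fixed agents plus a weighted distribution). The agent class names are soil's bundled example agents, and the explicit "strategy" values are spelled out here rather than relying on defaults; treat the exact field values as illustrative.

# Sketch: building the input for from_config by hand.
import random
import networkx as nx
from soil import agents, config

cfg = config.AgentConfig(
    topology=True,        # place the distributed agents on network nodes
    fixed=[
        {"agent_class": "CounterModel", "hidden": True, "topology": False},
    ],
    distribution=[
        {"agent_class": "CounterModel", "weight": 3, "strategy": "topology"},
        {"agent_class": "AggregatedCounter", "weight": 1, "strategy": "topology"},
    ],
)

specs = agents.from_config(cfg, random=random.Random(1),
                           topology=nx.complete_graph(20))
# `specs` is a list of plain dicts (agent_class, state, topology, ...); the
# environment turns them into real agents later, once unique_ids are known.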
def _from_fixed(
    lst: List[config.FixedAgentConfig],
    topology: bool,
    default: config.SingleAgentConfig,
) -> List[Dict[str, Any]]:
    agents = []

    counts_total = 0
    counts_network = 0

    for fixed in lst:
        agent = {}
        if default:
            agent = default.state.copy()
        agent.update(fixed.state)
        cls = serialization.deserialize(
            fixed.agent_class or (default and default.agent_class)
        )
        agent["agent_class"] = cls
        topo = (
            fixed.topology
            if ("topology" in fixed.__fields_set__)
            else topology or default.topology
        )

        if topo:
            agent["topology"] = True
            counts_network += 1
        if not fixed.hidden:
            counts_total += 1
        agents.append(agent)

    return agents, counts_total, counts_network
def _from_distro(
    distro: List[config.AgentDistro],
    n: int,
    topology: str,
    default: config.SingleAgentConfig,
    random,
) -> List[Dict[str, Any]]:

    agents = []

    if n is None:
        if any(dist.n is None for dist in distro):
            raise ValueError(
                "You must provide a total number of agents, or the number of each type"
            )
        n = sum(dist.n for dist in distro)

    weights = list(dist.weight if dist.weight is not None else 1 for dist in distro)
@@ -834,35 +588,48 @@ def _from_distro
    # So instead we calculate our own distribution to make sure the actual ratios are close to what we would expect

    # Calculate how many times each has to appear
    indices = list(
        chain.from_iterable([idx] * int(n * chunk) for (idx, n) in enumerate(norm))
    )

    # Complete with random agents following the original weight distribution
    if len(indices) < n:
        indices += random.choices(
            list(range(len(distro))),
            weights=[d.weight for d in distro],
            k=n - len(indices),
        )

    # Deserialize classes for efficiency
    classes = list(
        serialization.deserialize(i.agent_class or default.agent_class) for i in distro
    )

    # Add them in random order
    random.shuffle(indices)

    for idx in indices:
        d = distro[idx]
        agent = d.state.copy()
        cls = classes[idx]
        agent["agent_class"] = cls
        if default:
            agent.update(default.state)
        topology = (
            d.topology
            if ("topology" in d.__fields_set__)
            else topology or default.topology
        )
        if topology:
            agent["topology"] = topology
        agents.append(agent)

    return agents


from .network_agents import *
from .fsm import *
from .evented import *
from .BassModel import *
from .BigMarketModel import *
from .IndependentCascadeModel import *
@@ -876,4 +643,5 @@ try:
    from .Geo import Geo
except ImportError:
    import sys

    print("Could not load the Geo Agent, scipy is not installed", file=sys.stderr)
soil/agents/evented.py (new file, 57 lines)
@@ -0,0 +1,57 @@
from . import BaseAgent
from ..events import Message, Tell, Ask, Reply, TimedOut
from ..time import Cond
from functools import partial
from collections import deque


class Evented(BaseAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._inbox = deque()
        self._received = 0
        self._processed = 0

    def on_receive(self, *args, **kwargs):
        pass

    def received(self, expiration=None, timeout=None):
        current = self._received
        if expiration is None:
            expiration = float('inf') if timeout is None else self.now + timeout

        if expiration < self.now:
            raise ValueError("Invalid expiration time")

        def ready(agent):
            return agent._received > current or agent.now >= expiration

        def value(agent):
            if agent.now > expiration:
                raise TimedOut("No message received")

        c = Cond(func=ready, return_func=value)
        c._checked = True
        return c

    def tell(self, msg, sender):
        self._received += 1
        self._inbox.append(Tell(payload=msg, sender=sender))

    def ask(self, msg, timeout=None):
        self._received += 1
        ask = Ask(payload=msg)
        self._inbox.append(ask)
        expiration = float('inf') if timeout is None else self.now + timeout
        return ask.replied(expiration=expiration)

    def check_messages(self):
        while self._inbox:
            msg = self._inbox.popleft()
            self._processed += 1
            if msg.expired(self.now):
                continue
            reply = self.on_receive(msg.payload, sender=msg.sender)
            if isinstance(msg, Ask):
                msg.reply = reply
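A minimal sketch of how this inbox API can be used. The two agent classes and the call to check_messages() inside step() are assumptions for illustration; only tell(), on_receive() and check_messages() come from the file above.

# Sketch: a fire-and-forget message between two Evented agents.
from soil import agents

class Receiver(agents.Evented):
    def on_receive(self, payload, sender=None):
        # Invoked by check_messages() for every message that has not expired.
        self.info(f"got {payload!r} from {sender}")

    def step(self):
        self.check_messages()                    # drain the inbox filled by tell()/ask()
        return super().step()

class Sender(agents.Evented):
    def step(self):
        for other in self.model.agents(agent_class=Receiver):
            other.tell("hello", sender=self)
        return super().step()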
soil/agents/fsm.py (new file, 142 lines)
@@ -0,0 +1,142 @@
from . import MetaAgent, BaseAgent

from functools import partial, wraps
import inspect


def state(name=None):
    def decorator(func, name=None):
        """
        A state function should return either a state id, or a tuple (state_id, when)
        The default value for state_id is the current state id.
        The default value for when is the interval defined in the environment.
        """
        if inspect.isgeneratorfunction(func):
            orig_func = func

            @wraps(func)
            def func(self):
                while True:
                    if not self._coroutine:
                        self._coroutine = orig_func(self)

                    try:
                        if self._last_except:
                            n = self._coroutine.throw(self._last_except)
                        else:
                            n = self._coroutine.send(self._last_return)
                        if n:
                            return None, n
                        return n
                    except StopIteration as ex:
                        self._coroutine = None
                        next_state = ex.value
                        if next_state is not None:
                            self._set_state(next_state)
                        return next_state
                    finally:
                        self._last_return = None
                        self._last_except = None

        func.id = name or func.__name__
        func.is_default = False
        return func

    if callable(name):
        return decorator(name)
    else:
        return partial(decorator, name=name)


def default_state(func):
    func.is_default = True
    return func


class MetaFSM(MetaAgent):
    def __new__(mcls, name, bases, namespace):
        states = {}
        # Re-use states from inherited classes
        default_state = None
        for i in bases:
            if isinstance(i, MetaFSM):
                for state_id, state in i._states.items():
                    if state.is_default:
                        default_state = state
                    states[state_id] = state

        # Add new states
        for attr, func in namespace.items():
            if hasattr(func, "id"):
                if func.is_default:
                    default_state = func
                states[func.id] = func

        namespace.update(
            {
                "_default_state": default_state,
                "_states": states,
            }
        )

        return super(MetaFSM, mcls).__new__(
            mcls=mcls, name=name, bases=bases, namespace=namespace
        )


class FSM(BaseAgent, metaclass=MetaFSM):
    def __init__(self, **kwargs):
        super(FSM, self).__init__(**kwargs)
        if not hasattr(self, "state_id"):
            if not self._default_state:
                raise ValueError(
                    "No default state specified for {}".format(self.unique_id)
                )
            self.state_id = self._default_state.id

        self._coroutine = None
        self._set_state(self.state_id)

    def step(self):
        self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
        default_interval = super().step()

        next_state = self._states[self.state_id](self)

        when = None
        try:
            next_state, *when = next_state
            if not when:
                when = None
            elif len(when) == 1:
                when = when[0]
            else:
                raise ValueError(
                    "Too many values returned. Only state (and time) allowed"
                )
        except TypeError:
            pass

        if next_state is not None:
            self._set_state(next_state)

        return when or default_interval

    def _set_state(self, state, when=None):
        if hasattr(state, "id"):
            state = state.id
        if state not in self._states:
            raise ValueError("{} is not a valid state".format(state))
        self.state_id = state
        if when is not None:
            self.model.schedule.add(self, when=when)
        return state

    def die(self):
        return self.dead, super().die()

    @state
    def dead(self):
        return self.die()
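A sketch of a generator state built on the decorator above. The agent itself is illustrative; the yield/return semantics (only time values may be yielded, the return value may be a state or a (state, time) tuple) are the ones implemented by the wrapper in fsm.py.

# Sketch: an FSM agent with a generator state.
from soil import agents

class Walker(agents.FSM):
    @agents.default_state
    @agents.state
    def walking(self):
        for leg in range(3):
            self.debug(f"walking, leg {leg}")
            # Only a time value may be yielded: the agent sleeps and resumes
            # inside this generator, keeping `leg` and any other locals.
            yield 2
        # The return value may be a state (or a (state, time) tuple), as usual.
        return self.resting

    @agents.state
    def resting(self):
        return self.die()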
soil/agents/network_agents.py (new file, 82 lines)
@@ -0,0 +1,82 @@
from . import BaseAgent


class NetworkAgent(BaseAgent):
    def __init__(self, *args, topology, node_id, **kwargs):
        super().__init__(*args, **kwargs)

        assert topology is not None
        assert node_id is not None
        self.G = topology
        assert self.G
        self.node_id = node_id

    def count_neighbors(self, state_id=None, **kwargs):
        return len(self.get_neighbors(state_id=state_id, **kwargs))

    def get_neighbors(self, **kwargs):
        return list(self.iter_agents(limit_neighbors=True, **kwargs))

    @property
    def node(self):
        return self.G.nodes[self.node_id]

    def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
        unique_ids = None
        if isinstance(unique_id, list):
            unique_ids = set(unique_id)
        elif unique_id is not None:
            unique_ids = set(
                [
                    unique_id,
                ]
            )

        if limit_neighbors:
            neighbor_ids = set()
            for node_id in self.G.neighbors(self.node_id):
                if self.G.nodes[node_id].get("agent") is not None:
                    neighbor_ids.add(node_id)
            if unique_ids:
                unique_ids = unique_ids & neighbor_ids
            else:
                unique_ids = neighbor_ids
            if not unique_ids:
                return
            unique_ids = list(unique_ids)
        yield from super().iter_agents(unique_id=unique_ids, **kwargs)

    def subgraph(self, center=True, **kwargs):
        include = [self] if center else []
        G = self.G.subgraph(
            n.node_id for n in list(self.get_agents(**kwargs) + include)
        )
        return G

    def remove_node(self):
        print(f"Removing node for {self.unique_id}: {self.node_id}")
        self.G.remove_node(self.node_id)
        self.node_id = None

    def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
        if self.node_id not in self.G.nodes(data=False):
            raise ValueError(
                "{} not in list of existing agents in the network".format(
                    self.unique_id
                )
            )
        if other.node_id not in self.G.nodes(data=False):
            raise ValueError(
                "{} not in list of existing agents in the network".format(other)
            )

        self.G.add_edge(
            self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs
        )

    def die(self, remove=True):
        if not self.alive:
            return None
        if remove:
            self.remove_node()
        return super().die()
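A sketch of a network-aware FSM agent built on the class above. The gossip rule and state names are illustrative; only the NetworkAgent/FSM API and the state decorators come from the files in this compare.

# Sketch: combining NetworkAgent with FSM states.
from soil import agents

class Gossip(agents.FSM, agents.NetworkAgent):
    @agents.default_state
    @agents.state
    def neutral(self):
        # get_neighbors()/count_neighbors() only consider nodes that already
        # have an agent assigned to them.
        if self.count_neighbors(state_id="infected"):
            return self.infected

    @agents.state
    def infected(self):
        self.info(f"spreading to {self.count_neighbors(state_id='neutral')} neighbors")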
soil/config.py
@@ -19,6 +19,7 @@ import networkx as nx
# Could use TypeAlias in python >= 3.10
nodeId = int


class Node(BaseModel):
    id: nodeId
    state: Optional[Dict[str, Any]] = {}
@@ -43,7 +44,7 @@ class NetParams(BaseModel, extra=Extra.allow):

class NetConfig(BaseModel):
    params: Optional[NetParams]
    fixed: Optional[Union[Topology, nx.Graph]]
    path: Optional[str]

    class Config:
@@ -55,13 +56,14 @@ class NetConfig(BaseModel):

    @root_validator
    def validate_all(cls, values):
        if "params" not in values and "topology" not in values:
            raise ValueError(
                "You must specify either a topology or the parameters to generate a graph"
            )
        return values


class EnvConfig(BaseModel):

    @staticmethod
    def default():
        return EnvConfig()
@@ -70,7 +72,7 @@ class EnvConfig(BaseModel):
class SingleAgentConfig(BaseModel):
    agent_class: Optional[Union[Type, str]] = None
    unique_id: Optional[int] = None
    topology: Optional[bool] = False
    node_id: Optional[Union[int, str]] = None
    state: Optional[Dict[str, Any]] = {}

@@ -81,8 +83,10 @@ class FixedAgentConfig(SingleAgentConfig):

    @root_validator
    def validate_all(cls, values):
        if values.get("unique_id", None) is not None and values.get("n", 1) > 1:
            raise ValueError(
                f"An unique_id can only be provided when there is only one agent ({values.get('n')} given)"
            )
        return values


@@ -91,8 +95,8 @@ class OverrideAgentConfig(FixedAgentConfig):


class Strategy(Enum):
    topology = "topology"
    total = "total"


class AgentDistro(SingleAgentConfig):
@@ -102,7 +106,6 @@ class AgentDistro(SingleAgentConfig):

class AgentConfig(SingleAgentConfig):
    n: Optional[int] = None
    distribution: Optional[List[AgentDistro]] = None
    fixed: Optional[List[FixedAgentConfig]] = None
    override: Optional[List[OverrideAgentConfig]] = None
@@ -113,15 +116,19 @@ class AgentConfig(SingleAgentConfig):

    @root_validator
    def validate_all(cls, values):
        if "distribution" in values and (
            "n" not in values and "topology" not in values
        ):
            raise ValueError(
                "You need to provide the number of agents or a topology to extract the value from."
            )
        return values


class Config(BaseModel, extra=Extra.allow):
    version: Optional[str] = "1"

    name: str = "Unnamed Simulation"
    description: Optional[str] = None
    group: str = None
    dir_path: Optional[str] = None
@@ -141,45 +148,48 @@ class Config(BaseModel, extra=Extra.allow):
    def from_raw(cls, cfg):
        if isinstance(cfg, Config):
            return cfg
        if cfg.get("version", "1") == "1" and any(
            k in cfg for k in ["agents", "agent_class", "topology", "environment_class"]
        ):
            return convert_old(cfg)
        return Config(**cfg)


def convert_old(old, strict=True):
    """
    Try to convert old style configs into the new format.

    This is still a work in progress and might not work in many cases.
    """

    utils.logger.warning(
        "The old configuration format is deprecated. The converted file MAY NOT yield the right results"
    )

    new = old.copy()

    network = {}

    if "topology" in old:
        del new["topology"]
        network["topology"] = old["topology"]

    if "network_params" in old and old["network_params"]:
        del new["network_params"]
        for (k, v) in old["network_params"].items():
            if k == "path":
                network["path"] = v
            else:
                network.setdefault("params", {})[k] = v

    topology = None
    if network:
        topology = network

    agents = {"fixed": [], "distribution": []}

    def updated_agent(agent):
        """Convert an agent definition"""
        newagent = dict(agent)
        return newagent

@@ -187,80 +197,74 @@ def convert_old(old, strict=True):
    fixed = []
    override = []

    if "environment_agents" in new:

        for agent in new["environment_agents"]:
            agent.setdefault("state", {})["group"] = "environment"
            if "agent_id" in agent:
                agent["state"]["name"] = agent["agent_id"]
                del agent["agent_id"]
            agent["hidden"] = True
            agent["topology"] = False
            fixed.append(updated_agent(agent))
        del new["environment_agents"]

    if "agent_class" in old:
        del new["agent_class"]
        agents["agent_class"] = old["agent_class"]

    if "default_state" in old:
        del new["default_state"]
        agents["state"] = old["default_state"]

    if "network_agents" in old:
        agents["topology"] = True

        agents.setdefault("state", {})["group"] = "network"

        for agent in new["network_agents"]:
            agent = updated_agent(agent)
            if "agent_id" in agent:
                agent["state"]["name"] = agent["agent_id"]
                del agent["agent_id"]
                fixed.append(agent)
            else:
                by_weight.append(agent)
        del new["network_agents"]

    if "agent_class" in old and (not fixed and not by_weight):
        agents["topology"] = True
        by_weight = [{"agent_class": old["agent_class"], "weight": 1}]

    # TODO: translate states properly
    if "states" in old:
        del new["states"]
        states = old["states"]
        if isinstance(states, dict):
            states = states.items()
        else:
            states = enumerate(states)
        for (k, v) in states:
            override.append({"filter": {"node_id": k}, "state": v})

    agents["override"] = override
    agents["fixed"] = fixed
    agents["distribution"] = by_weight

    model_params = {}
    if "environment_params" in new:
        del new["environment_params"]
        model_params = dict(old["environment_params"])

    if "environment_class" in old:
        del new["environment_class"]
        new["model_class"] = old["environment_class"]

    if "dump" in old:
        del new["dump"]
        new["dry_run"] = not old["dump"]

    model_params["topology"] = topology
    model_params["agents"] = agents

    return Config(version="2", model_params=model_params, **new)
@@ -1,6 +1,6 @@
from mesa import DataCollector as MDC


class SoilDataCollector(MDC):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
@@ -18,9 +18,9 @@ def wrapcmd(func):
        known = globals()
        known.update(self.curframe.f_globals)
        known.update(self.curframe.f_locals)
        known["agent"] = known.get("self", None)
        known["model"] = known.get("self", {}).get("model")
        known["attrs"] = arg.strip().split()

        exec(func.__code__, known, known)

@@ -29,10 +29,12 @@ def wrapcmd(func):

class Debug(pdb.Pdb):
    def __init__(self, *args, skip_soil=False, **kwargs):
        skip = kwargs.get("skip", [])
        if skip_soil:
            skip.append("soil")
            skip.append("contextlib")
            skip.append("soil.*")
            skip.append("mesa.*")
        super(Debug, self).__init__(*args, skip=skip, **kwargs)
        self.prompt = "[soil-pdb] "

@@ -40,7 +42,7 @@ class Debug(pdb.Pdb):
    def _soil_agents(model, attrs=None, pretty=True, **kwargs):
        for agent in model.agents(**kwargs):
            d = agent
            print(" - " + indent(agent.to_str(keys=attrs, pretty=pretty), " "))

    @wrapcmd
    def do_soil_agents():
@@ -50,14 +52,20 @@ class Debug(pdb.Pdb):

    @wrapcmd
    def do_soil_list():
        return Debug._soil_agents(model, attrs=["state_id"], pretty=False)

    do_sl = do_soil_list

    def do_continue_state(self, arg):
        self.do_break_state(arg, temporary=True)
        return self.do_continue("")

    do_cs = do_continue_state

    @wrapcmd
    def do_soil_agent():
        if not agent:
            print("No agent available")
            return

        keys = None
@@ -70,41 +78,51 @@ class Debug(pdb.Pdb):

        print(agent.to_str(pretty=True, keys=keys))

    do_aa = do_soil_agent

    def do_break_state(self, arg: str, instances=None, temporary=False):
        """
        Break before a specified state is stepped into.
        """

        klass = None
        state = arg
        if not state:
            self.error("Specify at least a state name")
            return

        state, *tokens = state.lstrip().split()
        if tokens:
            instances = list(eval(token) for token in tokens)

        colon = state.find(":")

        if colon > 0:
            klass = state[:colon].rstrip()
            state = state[colon + 1 :].strip()

            print(klass, state, tokens)
            klass = eval(klass, self.curframe.f_globals, self.curframe_locals)

        if klass:
            klasses = [klass]
        else:
            klasses = [
                k
                for k in self.curframe.f_globals.values()
                if isinstance(k, type) and issubclass(k, FSM)
            ]

        if not klasses:
            self.error("No agent classes found")

        for klass in klasses:
            try:
                func = getattr(klass, state)
            except AttributeError:
                self.error(f"State {state} not found in class {klass}")
                continue
            if hasattr(func, "__func__"):
                func = func.__func__

            code = func.__code__
@@ -117,35 +135,56 @@ class Debug(pdb.Pdb):
            # Check for reasonable breakpoint
            line = self.checkline(filename, lineno)
            if not line:
                raise ValueError("no line found")
            # now set the break point
            cond = None
            if instances:
                cond = f"self.unique_id in { repr(instances) }"

            existing = self.get_breaks(filename, line)
            if existing:
                self.message("Breakpoint already exists at %s:%d" % (filename, line))
                continue
            err = self.set_break(filename, line, temporary, cond, funcname)
            if err:
                self.error(err)
            else:
                bp = self.get_breaks(filename, line)[-1]
                self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))

    do_bs = do_break_state

    def do_break_state_self(self, arg: str, temporary=False):
        """
        Break before a specified state is stepped into, for the current agent
        """
        agent = self.curframe.f_locals.get("self")
        if not agent:
            self.error("No current agent.")
            self.error("Try this again when the debugger is stopped inside an agent")
            return

        arg = f"{agent.__class__.__name__}:{ arg } {agent.unique_id}"
        return self.do_break_state(arg)

    do_bss = do_break_state_self

(removed: the old setup() and debug_env() helpers)

debugger = None


def set_trace(frame=None, **kwargs):
    global debugger
    if debugger is None:
        debugger = Debug(**kwargs)
    frame = frame or sys._getframe().f_back
    debugger.set_trace(frame)


def post_mortem(traceback=None, **kwargs):
    global debugger
    if debugger is None:
        debugger = Debug(**kwargs)
    t = sys.exc_info()[2]
    debugger.reset()
    debugger.interaction(None, t)
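A sketch of how these commands are meant to be driven. The agent class below is illustrative; set_trace() is the module-level helper defined above, and the prompt commands are the ones registered on the Debug class.

# Sketch: dropping into the soil debugger from inside an agent state.
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import debugging

class Suspicious(FSM, NetworkAgent):
    @default_state
    @state
    def neutral(self):
        if self.now > 10:
            debugging.set_trace()   # opens the [soil-pdb] prompt at this frame

# At the prompt, the commands added above are available:
#   sl                          # soil_list: one line per agent, with its state_id
#   aa                          # soil_agent: pretty-print the current agent
#   bs neutral                  # break whenever any FSM agent enters `neutral`
#   bs Suspicious:neutral 3 4   # same, but only for unique_ids 3 and 4
#   bss neutral                 # break on `neutral`, only for the current agent
#   cs neutral                  # temporary breakpoint on `neutral`, then continue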
|
|||||||
@@ -3,8 +3,8 @@ from __future__ import annotations
|
|||||||
import os
|
import os
|
||||||
import sqlite3
|
import sqlite3
|
||||||
import math
|
import math
|
||||||
import random
|
|
||||||
import logging
|
import logging
|
||||||
|
import inspect
|
||||||
|
|
||||||
from typing import Any, Dict, Optional, Union
|
from typing import Any, Dict, Optional, Union
|
||||||
from collections import namedtuple
|
from collections import namedtuple
|
||||||
@@ -18,10 +18,7 @@ import networkx as nx
|
|||||||
from mesa import Model
|
from mesa import Model
|
||||||
from mesa.datacollection import DataCollector
|
from mesa.datacollection import DataCollector
|
||||||
|
|
||||||
from . import agents as agentmod, config, serialization, utils, time, network
|
from . import agents as agentmod, config, serialization, utils, time, network, events
|
||||||
|
|
||||||
|
|
||||||
Record = namedtuple('Record', 'dict_id t_step key value')
|
|
||||||
|
|
||||||
|
|
||||||
class BaseEnvironment(Model):
|
class BaseEnvironment(Model):
|
||||||
@@ -37,9 +34,10 @@ class BaseEnvironment(Model):
|
|||||||
:meth:`soil.environment.Environment.get` method.
|
:meth:`soil.environment.Environment.get` method.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self,
|
def __init__(
|
||||||
id='unnamed_env',
|
self,
|
||||||
seed='default',
|
id="unnamed_env",
|
||||||
|
seed="default",
|
||||||
schedule=None,
|
schedule=None,
|
||||||
dir_path=None,
|
dir_path=None,
|
||||||
interval=1,
|
interval=1,
|
||||||
@@ -48,9 +46,12 @@ class BaseEnvironment(Model):
|
|||||||
agent_reporters: Optional[Any] = None,
|
agent_reporters: Optional[Any] = None,
|
||||||
model_reporters: Optional[Any] = None,
|
model_reporters: Optional[Any] = None,
|
||||||
tables: Optional[Any] = None,
|
tables: Optional[Any] = None,
|
||||||
**env_params):
|
**env_params,
|
||||||
|
):
|
||||||
|
|
||||||
super().__init__(seed=seed)
|
super().__init__(seed=seed)
|
||||||
|
self.env_params = env_params or {}
|
||||||
|
|
||||||
self.current_id = -1
|
self.current_id = -1
|
||||||
|
|
||||||
self.id = id
|
self.id = id
|
||||||
@@ -63,11 +64,8 @@ class BaseEnvironment(Model):
|
|||||||
|
|
||||||
self.agent_class = agent_class or agentmod.BaseAgent
|
self.agent_class = agent_class or agentmod.BaseAgent
|
||||||
|
|
||||||
self.init_agents(agents)
|
|
||||||
|
|
||||||
self.env_params = env_params or {}
|
|
||||||
|
|
||||||
self.interval = interval
|
self.interval = interval
|
||||||
|
self.init_agents(agents)
|
||||||
|
|
||||||
self.logger = utils.logger.getChild(self.id)
|
self.logger = utils.logger.getChild(self.id)
|
||||||
|
|
||||||
@@ -77,17 +75,27 @@ class BaseEnvironment(Model):
|
|||||||
tables=tables,
|
tables=tables,
|
||||||
)
|
)
|
||||||
|
|
||||||
def _read_single_agent(self, agent):
|
def _agent_from_dict(self, agent):
|
||||||
|
"""
|
||||||
|
Translate an agent dictionary into an agent
|
||||||
|
"""
|
||||||
agent = dict(**agent)
|
agent = dict(**agent)
|
||||||
cls = agent.pop('agent_class', None) or self.agent_class
|
cls = agent.pop("agent_class", None) or self.agent_class
|
||||||
unique_id = agent.pop('unique_id', None)
|
unique_id = agent.pop("unique_id", None)
|
||||||
if unique_id is None:
|
if unique_id is None:
|
||||||
unique_id = self.next_id()
|
unique_id = self.next_id()
|
||||||
|
|
||||||
return serialization.deserialize(cls)(unique_id=unique_id,
|
return serialization.deserialize(cls)(unique_id=unique_id, model=self, **agent)
|
||||||
model=self, **agent)
|
|
||||||
|
|
||||||
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
|
def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
|
||||||
|
"""
|
||||||
|
Initialize the agents in the model from either a `soil.config.AgentConfig` or a list of
|
||||||
|
dictionaries that each describes an agent.
|
||||||
|
|
||||||
|
If given a list of dictionaries, an agent will be created for each dictionary. The agent
|
||||||
|
class can be specified through the `agent_class` key. The rest of the items will be used
|
||||||
|
as parameters to the agent.
|
||||||
|
"""
|
||||||
if not agents:
|
if not agents:
|
||||||
return
|
return
|
||||||
|
|
||||||
@@ -98,14 +106,11 @@ class BaseEnvironment(Model):
|
|||||||
lst = config.AgentConfig(**agents)
|
lst = config.AgentConfig(**agents)
|
||||||
if lst.override:
|
if lst.override:
|
||||||
override = lst.override
|
override = lst.override
|
||||||
lst = agentmod.from_config(lst,
|
lst = self._agent_dict_from_config(lst)
|
||||||
topologies=getattr(self, 'topologies', None),
|
|
||||||
random=self.random)
|
|
||||||
|
|
||||||
# TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
|
# TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
|
||||||
# because it needs attribute such as unique_id, which are only present after init
|
# because it needs attribute such as unique_id, which are only present after init
|
||||||
new_agents = [self._read_single_agent(agent) for agent in lst]
|
new_agents = [self._agent_from_dict(agent) for agent in lst]
|
||||||
|
|
||||||
|
|
||||||
for a in new_agents:
|
for a in new_agents:
|
||||||
self.schedule.add(a)
|
self.schedule.add(a)
|
||||||
@@ -115,6 +120,8 @@ class BaseEnvironment(Model):
|
|||||||
for attr, value in rule.state.items():
|
for attr, value in rule.state.items():
|
||||||
setattr(agent, attr, value)
|
setattr(agent, attr, value)
|
||||||
|
|
||||||
|
def _agent_dict_from_config(self, cfg):
|
||||||
|
return agentmod.from_config(cfg, random=self.random)
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def agents(self):
|
def agents(self):
|
||||||
@@ -130,15 +137,16 @@ class BaseEnvironment(Model):
|
|||||||
def now(self):
|
def now(self):
|
||||||
if self.schedule:
|
if self.schedule:
|
||||||
return self.schedule.time
|
return self.schedule.time
|
||||||
raise Exception('The environment has not been scheduled, so it has no sense of time')
|
raise Exception(
|
||||||
|
"The environment has not been scheduled, so it has no sense of time"
|
||||||
|
)
|
||||||
|
|
||||||
|
def add_agent(self, unique_id=None, **kwargs):
|
||||||
|
if unique_id is None:
|
||||||
|
unique_id = self.next_id()
|
||||||
|
|
||||||
def add_agent(self, agent_id, agent_class, **kwargs):
|
kwargs["unique_id"] = unique_id
|
||||||
a = None
|
a = self._agent_from_dict(kwargs)
|
||||||
if agent_class:
|
|
||||||
a = agent_class(model=self,
|
|
||||||
unique_id=agent_id,
|
|
||||||
**kwargs)
|
|
||||||
|
|
||||||
self.schedule.add(a)
|
self.schedule.add(a)
|
||||||
return a
|
return a
|
||||||
@@ -151,16 +159,18 @@ class BaseEnvironment(Model):
|
|||||||
for k, v in kwargs:
|
for k, v in kwargs:
|
||||||
message += " {k}={v} ".format(k, v)
|
message += " {k}={v} ".format(k, v)
|
||||||
extra = {}
|
extra = {}
|
||||||
extra['now'] = self.now
|
extra["now"] = self.now
|
||||||
extra['id'] = self.id
|
extra["id"] = self.id
|
||||||
return self.logger.log(level, message, extra=extra)
|
return self.logger.log(level, message, extra=extra)
|
||||||
|
|
||||||
def step(self):
|
def step(self):
|
||||||
'''
|
"""
|
||||||
Advance one step in the simulation, and update the data collection and scheduler appropriately
|
Advance one step in the simulation, and update the data collection and scheduler appropriately
|
||||||
'''
|
"""
|
||||||
super().step()
|
super().step()
|
||||||
self.logger.info(f'--- Step {self.now:^5} ---')
|
self.logger.info(
|
||||||
|
f"--- Step: {self.schedule.steps:^5} - Time: {self.now:^5} ---"
|
||||||
|
)
|
||||||
self.schedule.step()
|
self.schedule.step()
|
||||||
self.datacollector.collect(self)
|
self.datacollector.collect(self)
|
||||||
|
|
||||||
@@ -168,10 +178,10 @@ class BaseEnvironment(Model):
|
|||||||
return key in self.env_params
|
return key in self.env_params
|
||||||
|
|
||||||
def get(self, key, default=None):
|
def get(self, key, default=None):
|
||||||
'''
|
"""
|
||||||
Get the value of an environment attribute.
|
Get the value of an environment attribute.
|
||||||
Return `default` if the value is not set.
|
Return `default` if the value is not set.
|
||||||
'''
|
"""
|
||||||
return self.env_params.get(key, default)
|
return self.env_params.get(key, default)
|
||||||
|
|
||||||
def __getitem__(self, key):
|
def __getitem__(self, key):
|
||||||
@@ -180,123 +190,135 @@ class BaseEnvironment(Model):
|
|||||||
def __setitem__(self, key, value):
|
def __setitem__(self, key, value):
|
||||||
return self.env_params.__setitem__(key, value)
|
return self.env_params.__setitem__(key, value)
|
||||||
|
|
||||||
def _agent_to_tuples(self, agent, now=None):
|
def __str__(self):
|
||||||
if now is None:
|
return str(self.env_params)
|
||||||
now = self.now
|
|
||||||
for k, v in agent.state.items():
|
|
||||||
yield Record(dict_id=agent.id,
|
|
||||||
t_step=now,
|
|
||||||
key=k,
|
|
||||||
value=v)
|
|
||||||
|
|
||||||
def state_to_tuples(self, agent_id=None, now=None):
|
|
||||||
if now is None:
|
|
||||||
now = self.now
|
|
||||||
|
|
||||||
if agent_id:
|
|
||||||
agent = self.agents[agent_id]
|
|
||||||
yield from self._agent_to_tuples(agent, now)
|
|
||||||
return
|
|
||||||
|
|
||||||
for k, v in self.env_params.items():
|
|
||||||
yield Record(dict_id='env',
|
|
||||||
t_step=now,
|
|
||||||
key=k,
|
|
||||||
value=v)
|
|
||||||
for agent in self.agents:
|
|
||||||
yield from self._agent_to_tuples(agent, now)
|
|
||||||
|
|
||||||
|
|
||||||
class NetworkEnvironment(BaseEnvironment):
    """
    The NetworkEnvironment is an environment that includes one or more networkx.Graph instances
    and methods to associate agents to nodes and vice versa.
    """

    def __init__(
        self, *args, topology: Union[config.NetConfig, nx.Graph] = None, **kwargs
    ):
        agents = kwargs.pop("agents", None)
        super().__init__(*args, agents=None, **kwargs)

        self._set_topology(topology)

        self.init_agents(agents)

    def init_agents(self, *args, **kwargs):
        """Initialize the agents from a"""
        super().init_agents(*args, **kwargs)
        for agent in self.schedule._agents.values():
            if hasattr(agent, "node_id"):
                self._init_node(agent)

    def _init_node(self, agent):
        """
        Make sure the node for a given agent has the proper attributes.
        """
        self.G.nodes[agent.node_id]["agent"] = agent

    def _agent_dict_from_config(self, cfg):
        return agentmod.from_config(cfg, topology=self.G, random=self.random)

    def _agent_from_dict(self, agent, unique_id=None):
        agent = dict(agent)

        if not agent.get("topology", False):
            return super()._agent_from_dict(agent)

        if unique_id is None:
            unique_id = self.next_id()
        node_id = agent.get("node_id", None)
        if node_id is None:
            node_id = network.find_unassigned(self.G, random=self.random)
            self.G.nodes[node_id]["agent"] = None
        agent["node_id"] = node_id
        agent["unique_id"] = unique_id
        agent["topology"] = self.G
        node_attrs = self.G.nodes[node_id]
        node_attrs.update(agent)
        agent = node_attrs

        a = super()._agent_from_dict(agent)
        self._init_node(a)

        return a

    def _set_topology(self, cfg=None, dir_path=None):
        if cfg is None:
            cfg = nx.Graph()
        elif not isinstance(cfg, nx.Graph):
            cfg = network.from_config(cfg, dir_path=dir_path or self.dir_path)

        self.G = cfg

    @property
    def network_agents(self):
        for a in self.schedule._agents:
            if isinstance(a, agentmod.NetworkAgent):
                yield a

    def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
        if unique_id is None:
            unique_id = self.next_id()
        if node_id is None:
            node_id = network.find_unassigned(
                G=self.G, shuffle=True, random=self.random
            )
            if node_id is None:
                node_id = f"node_for_{unique_id}"

        if node_id not in self.G.nodes:
            self.G.add_node(node_id)

        assert "agent" not in self.G.nodes[node_id]
        self.G.nodes[node_id]["agent"] = None  # Reserve

        a = self.add_agent(
            unique_id=unique_id,
            agent_class=agent_class,
            topology=self.G,
            node_id=node_id,
            **kwargs,
        )
        a["visible"] = True
        return a

    def add_agent(self, *args, **kwargs):
        a = super().add_agent(*args, **kwargs)
        if "node_id" in a:
            assert self.G.nodes[a.node_id]["agent"] == a
        return a

    def agent_for_node_id(self, node_id):
        return self.G.nodes[node_id].get("agent")

    def populate_network(self, agent_class, weights=None, **agent_params):
        if not hasattr(agent_class, "len"):
            agent_class = [agent_class]
            weights = None
        for (node_id, node) in self.G.nodes(data=True):
            if "agent" in node:
                continue
            a_class = self.random.choices(agent_class, weights)[0]
            self.add_agent(node_id=node_id, agent_class=a_class, **agent_params)


Environment = NetworkEnvironment


class EventedEnvironment(Environment):
    def broadcast(self, msg, sender, expiration=None, ttl=None, **kwargs):
        for agent in self.agents(**kwargs):
            self.logger.info(f'Telling {repr(agent)}: {msg} ttl={ttl}')
            try:
                agent._inbox.append(events.Tell(payload=msg, sender=sender, expiration=expiration if ttl is None else self.now+ttl))
            except AttributeError:
                self.info(f'Agent {agent.unique_id} cannot receive events')
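
For orientation, a minimal usage sketch of the single-topology API above. It is not part of the diff: the generated graph, the CounterModel agent class and the constructor arguments are assumptions chosen for illustration.

# Sketch only: assumes CounterModel from soil.agents and that the default
# constructor arguments are enough for a standalone environment.
import networkx as nx
from soil.environment import Environment
from soil.agents import CounterModel

G = nx.erdos_renyi_graph(n=20, p=0.2)        # any networkx.Graph can be passed as `topology`
env = Environment(id="example", topology=G)  # Environment is an alias of NetworkEnvironment
env.populate_network(CounterModel)           # one agent per node that has no agent yet
print(sum(1 for _ in env.network_agents))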
43
soil/events.py
Normal file
@@ -0,0 +1,43 @@
from .time import Cond
from dataclasses import dataclass, field
from typing import Any
from uuid import uuid4


class Event:
    pass


@dataclass
class Message:
    payload: Any
    sender: Any = None
    expiration: float = None
    id: int = field(default_factory=uuid4)

    def expired(self, when):
        return self.expiration is not None and self.expiration < when


class Reply(Message):
    source: Message


class Ask(Message):
    reply: Message = None

    def replied(self, expiration=None):
        def ready(agent):
            return self.reply is not None or agent.now > expiration

        def value(agent):
            if agent.now > expiration:
                raise TimedOut(f'No answer received for {self}')
            return self.reply

        return Cond(func=ready, return_func=value)


class Tell(Message):
    pass


class TimedOut(Exception):
    pass
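
A short sketch of how these event primitives are meant to combine with Cond-based scheduling. The surrounding agent code is assumed; only Ask, Tell, TimedOut and Cond come from the files above.

from soil.events import Ask, Tell, TimedOut

ask = Ask(payload="status?")
cond = ask.replied(expiration=10)
# cond.ready(agent) becomes True once ask.reply is set, or once agent.now > 10;
# cond.return_value(agent) then returns the reply, or raises TimedOut past the deadline.
tell = Tell(payload="ping", sender=None)   # fire-and-forget message, as used by broadcast()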
@@ -1,7 +1,9 @@
import os
import sys
from time import time as current_time
from io import BytesIO
from sqlalchemy import create_engine
from textwrap import dedent, indent


import matplotlib.pyplot as plt
@@ -9,7 +11,7 @@ import networkx as nx

from .serialization import deserialize
from .utils import try_backup, open_or_reuse, logger, timer

from . import utils, network
@@ -23,54 +25,58 @@ class DryRunner(BytesIO):
    def write(self, txt):
        if self.__copy_to:
            self.__copy_to.write("{}:::{}".format(self.__fname, txt))
        try:
            super().write(txt)
        except TypeError:
            super().write(bytes(txt, "utf-8"))

    def close(self):
        content = "(binary data not shown)"
        try:
            content = self.getvalue().decode()
        except UnicodeDecodeError:
            pass
        logger.info(
            "**Not** written to {} (dry run mode):\n\n{}\n\n".format(
                self.__fname, content
            )
        )
        super().close()


class Exporter:
    """
    Interface for all exporters. It is not necessary, but it is useful
    if you don't plan to implement all the methods.
    """

    def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
        self.simulation = simulation
        outdir = outdir or os.path.join(os.getcwd(), "soil_output")
        self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
        self.dry_run = dry_run
        if copy_to is None and dry_run:
            copy_to = sys.stdout
        self.copy_to = copy_to

    def sim_start(self):
        """Method to call when the simulation starts"""
        pass

    def sim_end(self):
        """Method to call when the simulation ends"""
        pass

    def trial_start(self, env):
        """Method to call when a trial starts"""
        pass

    def trial_end(self, env):
        """Method to call when a trial ends"""
        pass

    def output(self, f, mode="w", **kwargs):
        if self.dry_run:
            f = DryRunner(f, copy_to=self.copy_to)
        else:
@@ -81,46 +87,63 @@ class Exporter:
            pass
        return open_or_reuse(f, mode=mode, **kwargs)

    def get_dfs(self, env):
        yield from get_dc_dfs(env.datacollector, trial_id=env.id)


def get_dc_dfs(dc, trial_id=None):
    dfs = {
        "env": dc.get_model_vars_dataframe(),
        "agents": dc.get_agent_vars_dataframe(),
    }
    for table_name in dc.tables:
        dfs[table_name] = dc.get_table_dataframe(table_name)
    if trial_id:
        for (name, df) in dfs.items():
            df["trial_id"] = trial_id
    yield from dfs.items()


class default(Exporter):
    """Default exporter. Writes sqlite results, as well as the simulation YAML"""

    def sim_start(self):
        if self.dry_run:
            logger.info("NOT dumping results")
            return
        logger.info("Dumping results to %s", self.outdir)
        with self.output(self.simulation.name + ".dumped.yml") as f:
            f.write(self.simulation.to_yaml())
        self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
        try_backup(self.dbpath, remove=True)

    def trial_end(self, env):
        if self.dry_run:
            logger.info("Running in DRY_RUN mode, the database will NOT be created")
            return

        with timer(
            "Dumping simulation {} trial {}".format(self.simulation.name, env.id)
        ):
            engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)

            for (t, df) in self.get_dfs(env):
                df.to_sql(t, con=engine, if_exists="append")


class csv(Exporter):

    """Export the state of each environment (and its agents) in a separate CSV file"""

    def trial_end(self, env):
        with timer(
            "[CSV] Dumping simulation {} trial {} @ dir {}".format(
                self.simulation.name, env.id, self.outdir
            )
        ):
            for (df_name, df) in self.get_dfs(env):
                with self.output("{}.{}.csv".format(env.id, df_name)) as f:
                    df.to_csv(f)
@@ -128,87 +151,63 @@ class csv(Exporter):
class gexf(Exporter):
    def trial_end(self, env):
        if self.dry_run:
            logger.info("Not dumping GEXF in dry_run mode")
            return

        with timer(
            "[GEXF] Dumping simulation {} trial {}".format(self.simulation.name, env.id)
        ):
            with self.output("{}.gexf".format(env.id), mode="wb") as f:
                network.dump_gexf(env.history_to_graph(), f)
                self.dump_gexf(env, f)


class dummy(Exporter):

    def sim_start(self):
        with self.output("dummy", "w") as f:
            f.write("simulation started @ {}\n".format(current_time()))

    def trial_start(self, env):
        with self.output("dummy", "w") as f:
            f.write("trial started@ {}\n".format(current_time()))

    def trial_end(self, env):
        with self.output("dummy", "w") as f:
            f.write("trial ended@ {}\n".format(current_time()))

    def sim_end(self):
        with self.output("dummy", "a") as f:
            f.write("simulation ended @ {}\n".format(current_time()))


class graphdrawing(Exporter):

    def trial_end(self, env):
        # Outside effects
        f = plt.figure()
        nx.draw(
            env.G,
            node_size=10,
            width=0.2,
            pos=nx.spring_layout(env.G, scale=100),
            ax=f.add_subplot(111),
        )
        with open("graph-{}.png".format(env.id)) as f:
            f.savefig(f)


class summary(Exporter):
    """Print a summary of each trial to sys.stdout"""

    def trial_end(self, env):
        for (t, df) in self.get_dfs(env):
            if not len(df):
                continue
            msg = indent(str(df.describe()), " ")
            logger.info(
                dedent(
                    f"""
Dataframe {t}:
"""
                )
                + msg
            )
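
The Exporter interface above is designed to be subclassed. The sketch below writes one JSON file per dataframe and trial; the format choice and the class name are illustrative, not part of the diff.

from soil.exporters import Exporter


class json_dump(Exporter):
    """Hypothetical exporter: one JSON file per (trial, dataframe) pair."""

    def trial_end(self, env):
        for (name, df) in self.get_dfs(env):
            with self.output(f"{env.id}.{name}.json") as f:
                df.to_json(f, orient="records")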
@@ -9,6 +9,7 @@ import networkx as nx
from . import config, serialization, basestring


def from_config(cfg: config.NetConfig, dir_path: str = None):
    if not isinstance(cfg, config.NetConfig):
        cfg = config.NetConfig(**cfg)
@@ -19,60 +20,65 @@ def from_config(cfg: config.NetConfig, dir_path: str = None):
            path = os.path.join(dir_path, path)
        extension = os.path.splitext(path)[1][1:]
        kwargs = {}
        if extension == "gexf":
            kwargs["version"] = "1.2draft"
            kwargs["node_type"] = int
        try:
            method = getattr(nx.readwrite, "read_" + extension)
        except AttributeError:
            raise AttributeError("Unknown format")
        return method(path, **kwargs)

    if cfg.params:
        net_args = cfg.params.dict()
        net_gen = net_args.pop("generator")

        if dir_path not in sys.path:
            sys.path.append(dir_path)

        method = serialization.deserializer(
            net_gen,
            known_modules=[
                "networkx.generators",
            ],
        )
        return method(**net_args)

    if isinstance(cfg.fixed, config.Topology):
        cfg = cfg.fixed.dict()

    if isinstance(cfg, str) or isinstance(cfg, dict):
        return nx.json_graph.node_link_graph(cfg)

    return nx.Graph()


def find_unassigned(G, shuffle=False, random=random):
    """
    Link an agent to a node in a topology.

    If node_id is None, a node without an agent_id will be found.
    """
    # TODO: test
    candidates = list(G.nodes(data=True))
    if shuffle:
        random.shuffle(candidates)
    for next_id, data in candidates:
        if "agent" not in data:
            return next_id
    return None


def dump_gexf(G, f):
    for node in G.nodes():
        if "pos" in G.nodes[node]:
            G.nodes[node]["viz"] = {
                "position": {
                    "x": G.nodes[node]["pos"][0],
                    "y": G.nodes[node]["pos"][1],
                    "z": 0.0,
                }
            }
            del G.nodes[node]["pos"]

    nx.write_gexf(G, f, version="1.2draft")
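
A quick sketch of the contract find_unassigned gives the environment code above: a node counts as taken as soon as an "agent" key exists, even while its value is still None.

import random
import networkx as nx
from soil import network

G = nx.path_graph(3)
node = network.find_unassigned(G, shuffle=True, random=random)
if node is not None:
    G.nodes[node]["agent"] = None   # reserving the key marks the node as assigned
assert network.find_unassigned(G) != node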
@@ -15,49 +15,14 @@ import networkx as nx
from jinja2 import Template


logger = logging.getLogger("soil")


def load_file(infile):
    folder = os.path.dirname(infile)
    if folder not in sys.path:
        sys.path.append(folder)
    with open(infile, "r") as f:
        return list(chain.from_iterable(map(expand_template, load_string(f))))
@@ -66,14 +31,15 @@ def load_string(string):
def expand_template(config):
    if "template" not in config:
        yield config
        return
    if "vars" not in config:
        raise ValueError(
            ("You must provide a definition of variables" " for the template.")
        )

    template = config["template"]

    if not isinstance(template, str):
        template = yaml.dump(template)
@@ -85,9 +51,9 @@ def expand_template(config):
    blank_str = template.render({k: 0 for k in params[0].keys()})
    blank = list(load_string(blank_str))
    if len(blank) > 1:
        raise ValueError("Templates must not return more than one configuration")
    if "name" in blank[0]:
        raise ValueError("Templates cannot be named, use group instead")

    for ps in params:
        string = template.render(ps)
@@ -96,24 +62,24 @@ def expand_template(config):
def params_for_template(config):
    sampler_config = config.get("sampler", {"N": 100})
    sampler = sampler_config.pop("method", "SALib.sample.morris.sample")
    sampler = deserializer(sampler)
    bounds = config["vars"]["bounds"]

    problem = {
        "num_vars": len(bounds),
        "names": list(bounds.keys()),
        "bounds": list(v for v in bounds.values()),
    }
    samples = sampler(problem, **sampler_config)

    lists = config["vars"].get("lists", {})
    names = list(lists.keys())
    values = list(lists.values())
    combs = list(product(*values))

    allnames = names + problem["names"]
    allvalues = [(list(i[0]) + list(i[1])) for i in product(combs, samples)]
    params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
    return params
@@ -121,7 +87,7 @@ def params_for_template(config):
def load_files(*patterns, **kwargs):
    for pattern in patterns:
        for i in glob(pattern, **kwargs, recursive=True):
            for cfg in load_file(i):
                path = os.path.abspath(i)
                yield Config.from_raw(cfg), path
@@ -136,22 +102,24 @@ def load_config(cfg):
    yield from load_files(cfg)


builtins = importlib.import_module("builtins")

KNOWN_MODULES = [
    "soil",
]


def name(value, known_modules=KNOWN_MODULES):
    """Return a name that can be imported, to serialize/deserialize an object"""
    if value is None:
        return "None"
    if not isinstance(value, type):  # Get the class name first
        value = type(value)
    tname = value.__name__
    if hasattr(builtins, tname):
        return tname
    modname = value.__module__
    if modname == "__main__":
        return tname
    if known_modules and modname in known_modules:
        return tname
@@ -161,17 +129,17 @@ def name(value, known_modules=KNOWN_MODULES):
            module = importlib.import_module(kmod)
            if hasattr(module, tname):
                return tname
    return "{}.{}".format(modname, tname)


def serializer(type_):
    if type_ != "str" and hasattr(builtins, type_):
        return repr
    return lambda x: x


def serialize(v, known_modules=KNOWN_MODULES):
    """Get a text representation of an object."""
    tname = name(v, known_modules=known_modules)
    func = serializer(tname)
    return func(v), tname
@@ -196,9 +164,9 @@ IS_CLASS = re.compile(r"<class '(.*)'>")
def deserializer(type_, known_modules=KNOWN_MODULES):
    if type(type_) != str:  # Already deserialized
        return type_
    if type_ == "str":
        return lambda x="": x
    if type_ == "None":
        return lambda x=None: None
    if hasattr(builtins, type_):  # Check if it's a builtin type
        cls = getattr(builtins, type_)
@@ -208,7 +176,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
        modname, tname = match.group(1).rsplit(".", 1)
        module = importlib.import_module(modname)
        cls = getattr(module, tname)
        return getattr(cls, "deserialize", cls)

    # Otherwise, see if we can find the module and the class
    options = []
@@ -217,7 +185,7 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
    if mod:
        options.append((mod, type_))

    if "." in type_:  # Fully qualified module
        module, type_ = type_.rsplit(".", 1)
        options.append((module, type_))
@@ -226,27 +194,37 @@ def deserializer(type_, known_modules=KNOWN_MODULES):
        try:
            module = importlib.import_module(modname)
            cls = getattr(module, tname)
            return getattr(cls, "deserialize", cls)
        except (ImportError, AttributeError) as ex:
            errors.append((modname, tname, ex))
    raise ValueError('Could not find type "{}". Tried: {}'.format(type_, errors))


def deserialize(type_, value=None, globs=None, **kwargs):
    """Get an object from a text representation"""
    if not isinstance(type_, str):
        return type_
    if globs and type_ in globs:
        des = globs[type_]
    else:
        try:
            des = deserializer(type_, **kwargs)
        except ValueError as ex:
            try:
                des = eval(type_)
            except Exception:
                raise ex
    if value is None:
        return des
    return des(value)


def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
    """Return the list of deserialized objects"""
    # TODO: remove
    print("SERIALIZATION", kwargs)
    objects = []
    for name in names:
        mod = deserialize(name, known_modules=known_modules)
        objects.append(mod(*args, **kwargs))
    return objects
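
A round-trip sketch for the (de)serialization helpers; the exact return values shown in the comments are assumptions based on the code above rather than documented behaviour.

from soil import serialization

text, tname = serialization.serialize(42)        # a builtin serializes via repr -> ("42", "int")
value = serialization.deserialize(tname, text)   # back to a Python int
graph_cls = serialization.deserialize("networkx.Graph")   # dotted names are imported on demand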
@@ -11,18 +11,16 @@ import networkx as nx
from textwrap import dedent

from dataclasses import dataclass, field, asdict
from typing import Any, Dict, Union, Optional, List

from networkx.readwrite import json_graph
from functools import partial
import pickle

from . import serialization, exporters, utils, basestring, agents
from .environment import Environment
from .utils import logger, run_and_return_exceptions
from .config import Config, convert_old
@@ -35,74 +33,105 @@ class Simulation:
    config (optional): :class:`config.Config`
        name of the Simulation

    kwargs: parameters to use to initialize a new configuration, if one has not been provided.
    """

    version: str = "2"
    name: str = "Unnamed simulation"
    description: Optional[str] = ""
    group: str = None
    model_class: Union[str, type] = "soil.Environment"
    model_params: dict = field(default_factory=dict)
    seed: str = field(default_factory=lambda: current_time())
    dir_path: str = field(default_factory=lambda: os.getcwd())
    max_time: float = float("inf")
    max_steps: int = -1
    interval: int = 1
    num_trials: int = 1
    parallel: Optional[bool] = None
    exporters: Optional[List[str]] = field(default_factory=list)
    outdir: Optional[str] = None
    exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
    dry_run: bool = False
    extra: Dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, env, **kwargs):

        ignored = {
            k: v for k, v in env.items() if k not in inspect.signature(cls).parameters
        }

        d = {k: v for k, v in env.items() if k not in ignored}
        if ignored:
            d.setdefault("extra", {}).update(ignored)
        if ignored:
            print(f'Warning: Ignoring these parameters (added to "extra"): { ignored }')
        d.update(kwargs)

        return cls(**d)

    def run_simulation(self, *args, **kwargs):
        return self.run(*args, **kwargs)

    def run(self, *args, **kwargs):
        """Run the simulation and return the list of resulting environments"""
        logger.info(
            dedent(
                """
            Simulation:
            ---
            """
            )
            + self.to_yaml()
        )
        return list(self.run_gen(*args, **kwargs))

    def run_gen(
        self,
        parallel=False,
        dry_run=None,
        exporters=None,
        outdir=None,
        exporter_params={},
        log_level=None,
        **kwargs,
    ):
        """Run the simulation and yield the resulting environments."""
        if log_level:
            logger.setLevel(log_level)
        outdir = outdir or self.outdir
        logger.info("Using exporters: %s", exporters or [])
        logger.info("Output directory: %s", outdir)
        if dry_run is None:
            dry_run = self.dry_run
        if exporters is None:
            exporters = self.exporters
        if not exporter_params:
            exporter_params = self.exporter_params

        exporters = serialization.deserialize_all(
            exporters,
            simulation=self,
            known_modules=[
                "soil.exporters",
            ],
            dry_run=dry_run,
            outdir=outdir,
            **exporter_params,
        )

        with utils.timer("simulation {}".format(self.name)):
            for exporter in exporters:
                exporter.sim_start()

            for env in utils.run_parallel(
                func=self.run_trial,
                iterable=range(int(self.num_trials)),
                parallel=parallel,
                log_level=log_level,
                **kwargs,
            ):

                for exporter in exporters:
                    exporter.trial_start(env)
@@ -115,28 +144,36 @@ class Simulation:
            for exporter in exporters:
                exporter.sim_end()

    def get_env(self, trial_id=0, model_params=None, **kwargs):
        """Create an environment for a trial of the simulation"""

        def deserialize_reporters(reporters):
            for (k, v) in reporters.items():
                if isinstance(v, str) and v.startswith("py:"):
                    reporters[k] = serialization.deserialize(v.split(":", 1)[1])
            return reporters

        params = self.model_params.copy()
        if model_params:
            params.update(model_params)
        params.update(kwargs)

        agent_reporters = deserialize_reporters(params.pop("agent_reporters", {}))
        model_reporters = deserialize_reporters(params.pop("model_reporters", {}))

        env = serialization.deserialize(self.model_class)
        return env(
            id=f"{self.name}_trial_{trial_id}",
            seed=f"{self.seed}_trial_{trial_id}",
            dir_path=self.dir_path,
            agent_reporters=agent_reporters,
            model_reporters=model_reporters,
            **params,
        )

    def run_trial(
        self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts
    ):
        """
        Run a single trial of the simulation
@@ -145,73 +182,83 @@ class Simulation:
        logger.setLevel(log_level)
        model = self.get_env(trial_id, **opts)
        trial_id = trial_id if trial_id is not None else current_time()
        with utils.timer("Simulation {} trial {}".format(self.name, trial_id)):
            return self.run_model(
                model=model, trial_id=trial_id, until=until, log_level=log_level
            )

    def run_model(self, model, until=None, **opts):
        # Set-up trial environment and graph
        until = float(until or self.max_time or "inf")

        # Set up agents on nodes
        def is_done():
            return not model.running

        if until and hasattr(model.schedule, "time"):
            prev = is_done

            def is_done():
                return prev() or model.schedule.time >= until

        if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, "steps"):
            prev_steps = is_done

            def is_done():
                return prev_steps() or model.schedule.steps >= self.max_steps

        newline = "\n"
        logger.info(
            dedent(
                f"""
            Model stats:
              Agents (total: { model.schedule.get_agent_count() }):
              - { (newline + ' - ').join(str(a) for a in model.schedule.agents) }

              Topology size: { len(model.G) if hasattr(model, "G") else 0 }
            """
            )
        )

        while not is_done():
            utils.logger.debug(
                f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}'
            )
            model.step()

        if (
            model.schedule.time < until
        ):  # Simulation ended (no more steps) before the expected time
            model.schedule.time = until
        return model

    def to_dict(self):
        d = asdict(self)
        if not isinstance(d["model_class"], str):
            d["model_class"] = serialization.name(d["model_class"])
        d["model_params"] = serialization.serialize_dict(d["model_params"])
        d["dir_path"] = str(d["dir_path"])
        d["version"] = "2"
        return d

    def to_yaml(self):
        return yaml.dump(self.to_dict())


def iter_from_config(*cfgs, **kwargs):
    for config in cfgs:
        configs = list(serialization.load_config(config))
        for config, path in configs:
            d = dict(config)
            if "dir_path" not in d:
                d["dir_path"] = os.path.dirname(path)
            yield Simulation.from_dict(d, **kwargs)


def from_config(conf_or_path):
    lst = list(iter_from_config(conf_or_path))
    if len(lst) > 1:
        raise AttributeError("Provide only one configuration")
    return lst[0]
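
For context, a sketch of driving the refactored Simulation directly from Python with the new exporter-related fields. The parameter values are placeholders; a real run would also configure agents and a topology through model_params.

from soil.simulation import Simulation

sim = Simulation(
    name="example",
    model_class="soil.Environment",
    model_params={},                       # placeholder: agents/topology would go here
    num_trials=2,
    max_steps=50,
    exporters=["soil.exporters.default"],  # resolved through serialization.deserialize_all
)
envs = sim.run()                           # one resulting environment per trial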
170
soil/time.py
170
soil/time.py
@@ -2,11 +2,20 @@ from mesa.time import BaseScheduler
|
|||||||
from queue import Empty
|
from queue import Empty
|
||||||
from heapq import heappush, heappop, heapify
|
from heapq import heappush, heappop, heapify
|
||||||
import math
|
import math
|
||||||
|
|
||||||
|
from inspect import getsource
|
||||||
|
from numbers import Number
|
||||||
|
|
||||||
from .utils import logger
|
from .utils import logger
|
||||||
from mesa import Agent as MesaAgent
|
from mesa import Agent as MesaAgent
|
||||||
|
|
||||||
|
|
||||||
INFINITY = float('inf')
|
INFINITY = float("inf")
|
||||||
|
|
||||||
|
|
||||||
|
class DeadAgent(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
class When:
|
class When:
|
||||||
def __init__(self, time):
|
def __init__(self, time):
|
||||||
@@ -14,9 +23,66 @@ class When:
|
|||||||
return time
|
return time
|
||||||
self._time = time
|
self._time = time
|
||||||
|
|
||||||
def abs(self, time):
|
def next(self, time):
|
||||||
return self._time
|
return self._time
|
||||||
|
|
||||||
|
def abs(self, time):
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return str(f"When({self._time})")
|
||||||
|
|
||||||
|
def __lt__(self, other):
|
||||||
|
if isinstance(other, Number):
|
||||||
|
return self._time < other
|
||||||
|
return self._time < other.next(self._time)
|
||||||
|
|
||||||
|
def __gt__(self, other):
|
||||||
|
if isinstance(other, Number):
|
||||||
|
return self._time > other
|
||||||
|
return self._time > other.next(self._time)
|
||||||
|
|
||||||
|
def ready(self, agent):
|
||||||
|
return self._time <= agent.model.schedule.time
|
||||||
|
|
||||||
|
def return_value(self, agent):
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
class Cond(When):
|
||||||
|
def __init__(self, func, delta=1, return_func=lambda agent: None):
|
||||||
|
self._func = func
|
||||||
|
self._delta = delta
|
||||||
|
self._checked = False
|
||||||
|
self._return_func = return_func
|
||||||
|
|
||||||
|
def next(self, time):
|
||||||
|
if self._checked:
|
||||||
|
return time + self._delta
|
||||||
|
return time
|
||||||
|
|
||||||
|
def abs(self, time):
|
||||||
|
return self
|
||||||
|
|
||||||
|
def ready(self, agent):
|
||||||
|
self._checked = True
|
||||||
|
return self._func(agent)
|
||||||
|
|
||||||
|
def return_value(self, agent):
|
||||||
|
return self._return_func(agent)
|
||||||
|
|
||||||
|
def __eq__(self, other):
|
||||||
|
return False
|
||||||
|
|
||||||
|
def __lt__(self, other):
|
||||||
|
return True
|
||||||
|
|
||||||
|
def __gt__(self, other):
|
||||||
|
return False
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return str(f'Cond("{getsource(self._func)}")')
|
||||||
|
|
||||||
|
|
||||||
NEVER = When(INFINITY)
|
NEVER = When(INFINITY)
|
||||||
|
|
||||||
@@ -26,11 +92,19 @@ class Delta(When):
|
|||||||
self._delta = delta
|
self._delta = delta
|
||||||
|
|
||||||
def __eq__(self, other):
|
def __eq__(self, other):
|
||||||
|
if isinstance(other, Delta):
|
||||||
return self._delta == other._delta
|
return self._delta == other._delta
|
||||||
|
return False
|
||||||
|
|
||||||
def abs(self, time):
|
def abs(self, time):
|
||||||
|
return When(self._delta + time)
|
||||||
|
|
||||||
|
def next(self, time):
|
||||||
return time + self._delta
|
return time + self._delta
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return str(f"Delta({self._delta})")
|
||||||
|
|
||||||
|
|
||||||
class TimedActivation(BaseScheduler):
|
class TimedActivation(BaseScheduler):
|
||||||
"""A scheduler which activates each agent when the agent requests.
|
"""A scheduler which activates each agent when the agent requests.
|
||||||
@@ -42,18 +116,21 @@ class TimedActivation(BaseScheduler):
|
|||||||
self._next = {}
|
self._next = {}
|
||||||
self._queue = []
|
self._queue = []
|
||||||
self.next_time = 0
|
self.next_time = 0
|
||||||
self.logger = logger.getChild(f'time_{ self.model }')
|
self.logger = logger.getChild(f"time_{ self.model }")
|
||||||
|
|
||||||
def add(self, agent: MesaAgent, when=None):
|
def add(self, agent: MesaAgent, when=None):
|
||||||
if when is None:
|
if when is None:
|
||||||
when = self.time
|
when = When(self.time)
|
||||||
|
elif not isinstance(when, When):
|
||||||
|
when = When(when)
|
||||||
if agent.unique_id in self._agents:
|
if agent.unique_id in self._agents:
|
||||||
self._queue.remove((self._next[agent.unique_id], agent.unique_id))
|
|
||||||
del self._agents[agent.unique_id]
|
del self._agents[agent.unique_id]
|
||||||
|
if agent.unique_id in self._next:
|
||||||
|
self._queue.remove((self._next[agent.unique_id], agent))
|
||||||
heapify(self._queue)
|
heapify(self._queue)
|
||||||
|
|
||||||
heappush(self._queue, (when, agent.unique_id))
|
|
||||||
self._next[agent.unique_id] = when
|
self._next[agent.unique_id] = when
|
||||||
|
heappush(self._queue, (when, agent))
|
||||||
super().add(agent)
|
super().add(agent)
|
||||||
|
|
||||||
def step(self) -> None:
|
def step(self) -> None:
|
||||||
@@ -62,38 +139,77 @@ class TimedActivation(BaseScheduler):
|
|||||||
an agent will signal when it wants to be scheduled next.
|
an agent will signal when it wants to be scheduled next.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
self.logger.debug(f'Simulation step {self.next_time}')
|
self.logger.debug(f"Simulation step {self.time}")
|
||||||
if not self.model.running:
|
if not self.model.running:
|
||||||
return
|
return
|
||||||
|
|
||||||
self.time = self.next_time
|
when = NEVER
|
||||||
when = self.time
|
|
||||||
|
|
||||||
while self._queue and self._queue[0][0] == self.time:
|
to_process = []
|
||||||
(when, agent_id) = heappop(self._queue)
|
skipped = []
|
||||||
self.logger.debug(f'Stepping agent {agent_id}')
|
next_time = INFINITY
|
||||||
|
|
||||||
agent = self._agents[agent_id]
|
ix = 0
|
||||||
returned = agent.step()
|
|
||||||
|
|
||||||
if not agent.alive:
|
self.logger.debug(f"Queue length: {len(self._queue)}")
|
||||||
self.remove(agent)
|
|
||||||
|
while self._queue:
|
||||||
|
(when, agent) = self._queue[0]
|
||||||
|
if when > self.time:
|
||||||
|
break
|
||||||
|
heappop(self._queue)
|
||||||
|
if when.ready(agent):
|
||||||
|
try:
|
||||||
|
agent._last_return = when.return_value(agent)
|
||||||
|
except Exception as ex:
|
||||||
|
agent._last_except = ex
|
||||||
|
|
||||||
|
self._next.pop(agent.unique_id, None)
|
||||||
|
to_process.append(agent)
|
||||||
continue
|
continue
|
||||||
|
|
||||||
when = (returned or Delta(1)).abs(self.time)
|
next_time = min(next_time, when.next(self.time))
|
||||||
if when < self.time:
|
self._next[agent.unique_id] = next_time
|
||||||
raise Exception("Cannot schedule an agent for a time in the past ({} < {})".format(when, self.time))
|
skipped.append((when, agent))
|
||||||
|
|
||||||
self._next[agent_id] = when
|
if self._queue:
|
||||||
heappush(self._queue, (when, agent_id))
|
next_time = min(next_time, self._queue[0][0].next(self.time))
|
||||||
|
|
||||||
|
self._queue = [*skipped, *self._queue]
|
||||||
|
|
||||||
|
for agent in to_process:
|
||||||
|
self.logger.debug(f"Stepping agent {agent}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
returned = ((agent.step() or Delta(1))).abs(self.time)
|
||||||
|
except DeadAgent:
|
||||||
|
if agent.unique_id in self._next:
|
||||||
|
del self._next[agent.unique_id]
|
||||||
|
agent.alive = False
|
||||||
|
continue
|
||||||
|
|
||||||
|
if not getattr(agent, "alive", True):
|
||||||
|
continue
|
||||||
|
|
||||||
|
value = returned.next(self.time)
|
||||||
|
agent._last_return = value
|
||||||
|
|
||||||
|
if value < self.time:
|
||||||
|
raise Exception(
|
||||||
|
f"Cannot schedule an agent for a time in the past ({when} < {self.time})"
|
||||||
|
)
|
||||||
|
if value < INFINITY:
|
||||||
|
next_time = min(value, next_time)
|
||||||
|
|
||||||
|
self._next[agent.unique_id] = returned
|
||||||
|
heappush(self._queue, (returned, agent))
|
||||||
|
else:
|
||||||
|
assert not self._next[agent.unique_id]
|
||||||
|
|
||||||
self.steps += 1
|
self.steps += 1
|
||||||
|
self.logger.debug(f"Updating time step: {self.time} -> {next_time}")
|
||||||
|
self.time = next_time
|
||||||
|
|
||||||
if not self._queue:
|
if not self._queue or next_time == INFINITY:
|
||||||
self.time = INFINITY
|
|
||||||
self.next_time = INFINITY
|
|
||||||
self.model.running = False
|
self.model.running = False
|
||||||
return self.time
|
return self.time
|
||||||
|
|
||||||
self.next_time = self._queue[0][0]
|
|
||||||
self.logger.debug(f'Next step: {self.next_time}')
|
|
||||||
|
|||||||
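The reworked `TimedActivation.step` above re-queues every agent according to the value its `step()` returns: the scheduler converts the return value with `.abs(self.time)` (falling back to `Delta(1)`) and advances `self.time` to the earliest pending activation. A minimal sketch of how an agent can use this, assuming the `soil.agents.BaseAgent` and `soil.time.Delta` names used elsewhere in this changeset:

```python
from soil import agents
from soil.time import Delta


class Sleeper(agents.BaseAgent):
    """Toy agent that asks the scheduler to wake it up two time units later."""

    def step(self):
        # Do some work here, then return a Delta. TimedActivation.step() turns
        # the returned value into an absolute time with .abs(self.time) and
        # pushes the agent back onto its priority queue for that time.
        # Returning nothing is treated as Delta(1): run again on the next step.
        return Delta(2)
```

An agent that raises `DeadAgent` is removed from `_next` and is not rescheduled, which is what the updated tests later in this changeset rely on.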
@@ -4,57 +4,75 @@ import os
 import traceback

 from functools import partial
-from shutil import copyfile
+from shutil import copyfile, move
 from multiprocessing import Pool

 from contextlib import contextmanager

-logger = logging.getLogger('soil')
+logger = logging.getLogger("soil")
 logger.setLevel(logging.INFO)

 timeformat = "%H:%M:%S"

-if os.environ.get('SOIL_VERBOSE', ''):
+if os.environ.get("SOIL_VERBOSE", ""):
     logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
 else:
     logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"

 logFormatter = logging.Formatter(logformat, timeformat)

 consoleHandler = logging.StreamHandler()
 consoleHandler.setFormatter(logFormatter)
-logger.addHandler(consoleHandler)
+logging.basicConfig(
+    level=logging.INFO,
+    handlers=[
+        consoleHandler,
+    ],
+)


 @contextmanager
-def timer(name='task', pre="", function=logger.info, to_object=None):
+def timer(name="task", pre="", function=logger.info, to_object=None):
     start = current_time()
-    function('{}Starting {} at {}.'.format(pre, name,
-                                           strftime("%X", gmtime(start))))
+    function("{}Starting {} at {}.".format(pre, name, strftime("%X", gmtime(start))))
     yield start
     end = current_time()
-    function('{}Finished {} at {} in {} seconds'.format(pre, name,
-                                                        strftime("%X", gmtime(end)),
-                                                        str(end-start)))
+    function(
+        "{}Finished {} at {} in {} seconds".format(
+            pre, name, strftime("%X", gmtime(end)), str(end - start)
+        )
+    )
     if to_object:
         to_object.start = start
         to_object.end = end


-def safe_open(path, mode='r', backup=True, **kwargs):
+def try_backup(path, remove=False):
+    if not os.path.exists(path):
+        return None
     outdir = os.path.dirname(path)
     if outdir and not os.path.exists(outdir):
         os.makedirs(outdir)
-    if backup and 'w' in mode and os.path.exists(path):
-        creation = os.path.getctime(path)
-        stamp = strftime('%Y-%m-%d_%H.%M.%S', localtime(creation))
+    creation = os.path.getctime(path)
+    stamp = strftime("%Y-%m-%d_%H.%M.%S", localtime(creation))

-        backup_dir = os.path.join(outdir, 'backup')
-        if not os.path.exists(backup_dir):
-            os.makedirs(backup_dir)
-        newpath = os.path.join(backup_dir, '{}@{}'.format(os.path.basename(path),
-                                                          stamp))
-        copyfile(path, newpath)
+    backup_dir = os.path.join(outdir, "backup")
+    if not os.path.exists(backup_dir):
+        os.makedirs(backup_dir)
+    newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
+    if move:
+        move(path, newpath)
+    else:
+        copyfile(path, newpath)
+    return newpath
+
+
+def safe_open(path, mode="r", backup=True, **kwargs):
+    outdir = os.path.dirname(path)
+    if outdir and not os.path.exists(outdir):
+        os.makedirs(outdir)
+    if backup and "w" in mode:
+        try_backup(path)
     return open(path, mode=mode, **kwargs)


@@ -63,24 +81,26 @@ def open_or_reuse(f, *args, **kwargs):
     try:
         with safe_open(f, *args, **kwargs) as f:
             yield f
-    except (AttributeError, TypeError):
+    except (AttributeError, TypeError) as ex:
         yield f


 def flatten_dict(d):
     if not isinstance(d, dict):
         return d
     return dict(_flatten_dict(d))

-def _flatten_dict(d, prefix=''):
+
+def _flatten_dict(d, prefix=""):
     if not isinstance(d, dict):
         # print('END:', prefix, d)
         yield prefix, d
         return
     if prefix:
-        prefix = prefix + '.'
+        prefix = prefix + "."
     for k, v in d.items():
         # print(k, v)
-        res = list(_flatten_dict(v, prefix='{}{}'.format(prefix, k)))
+        res = list(_flatten_dict(v, prefix="{}{}".format(prefix, k)))
         # print('RES:', res)
         yield from res

@@ -92,7 +112,7 @@ def unflatten_dict(d):
         if not isinstance(k, str):
             target[k] = v
             continue
-        tokens = k.split('.')
+        tokens = k.split(".")
         if len(tokens) < 2:
             target[k] = v
             continue
@@ -105,27 +125,28 @@ def unflatten_dict(d):


 def run_and_return_exceptions(func, *args, **kwargs):
-    '''
+    """
     A wrapper for run_trial that catches exceptions and returns them.
     It is meant for async simulations.
-    '''
+    """
     try:
         return func(*args, **kwargs)
     except Exception as ex:
         if ex.__cause__ is not None:
             ex = ex.__cause__
-        ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__)[:])
+        ex.message = "".join(
+            traceback.format_exception(type(ex), ex, ex.__traceback__)[:]
+        )
         return ex


 def run_parallel(func, iterable, parallel=False, **kwargs):
-    if parallel and not os.environ.get('SOIL_DEBUG', None):
+    if parallel and not os.environ.get("SOIL_DEBUG", None):
         p = Pool()
-        wrapped_func = partial(run_and_return_exceptions,
-                               func, **kwargs)
+        wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
         for i in p.imap_unordered(wrapped_func, iterable):
             if isinstance(i, Exception):
-                logger.error('Trial failed:\n\t%s', i.message)
+                logger.error("Trial failed:\n\t%s", i.message)
                 continue
             yield i
     else:
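The new `try_backup` helper shown above factors the backup logic out of `safe_open`: before a file is reopened for writing, any existing copy is stashed under a `backup/` subdirectory with a suffix derived from its creation time. A hedged usage sketch (the `output/results.csv` path is only an example):

```python
from soil.utils import safe_open, try_backup

# Opening an existing file for writing first stores the previous version as
# output/backup/results.csv@<creation timestamp> and then returns a new handle.
with safe_open("output/results.csv", mode="w") as f:
    f.write("id,value\n")

# try_backup can also be called on its own; it returns the path of the backup,
# or None if the file did not exist in the first place.
backup_path = try_backup("output/results.csv")
```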
@@ -4,7 +4,7 @@ import logging
 logger = logging.getLogger(__name__)

 ROOT = os.path.dirname(__file__)
-DEFAULT_FILE = os.path.join(ROOT, 'VERSION')
+DEFAULT_FILE = os.path.join(ROOT, "VERSION")


 def read_version(versionfile=DEFAULT_FILE):
@@ -12,9 +12,10 @@ def read_version(versionfile=DEFAULT_FILE):
         with open(versionfile) as f:
             return f.read().strip()
     except IOError:  # pragma: no cover
-        logger.error(('Running an unknown version of {}.'
-                      'Be careful!.').format(__name__))
-        return '0.0'
+        logger.error(
+            ("Running an unknown version of {}." "Be careful!.").format(__name__)
+        )
+        return "0.0"


 __version__ = read_version()
@@ -1,5 +1,6 @@
 from mesa.visualization.UserParam import UserSettableParameter

+
 class UserSettableParameter(UserSettableParameter):
     def __str__(self):
         return self.value
@@ -20,6 +20,7 @@ from tornado.concurrent import run_on_executor
 from concurrent.futures import ThreadPoolExecutor

 from ..simulation import Simulation

 logger = logging.getLogger(__name__)
 logger.setLevel(logging.INFO)
+
@@ -31,21 +32,24 @@ LOGGING_INTERVAL = 0.5
 # Workaround to let Soil load the required modules
 sys.path.append(ROOT)


 class PageHandler(tornado.web.RequestHandler):
     """Handler for the HTML template which holds the visualization."""

     def get(self):
-        self.render('index.html', port=self.application.port,
-                    name=self.application.name)
+        self.render(
+            "index.html", port=self.application.port, name=self.application.name
+        )


 class SocketHandler(tornado.websocket.WebSocketHandler):
     """Handler for websocket."""

     executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)

     def open(self):
         if self.application.verbose:
-            logger.info('Socket opened!')
+            logger.info("Socket opened!")

     def check_origin(self, origin):
         return True
@@ -55,116 +59,156 @@ class SocketHandler(tornado.websocket.WebSocketHandler):

         msg = tornado.escape.json_decode(message)

-        if msg['type'] == 'config_file':
+        if msg["type"] == "config_file":

             if self.application.verbose:
-                print(msg['data'])
+                print(msg["data"])

-            self.config = list(yaml.load_all(msg['data']))
+            self.config = list(yaml.load_all(msg["data"]))

             if len(self.config) > 1:
-                error = 'Please, provide only one configuration.'
+                error = "Please, provide only one configuration."
                 if self.application.verbose:
                     logger.error(error)
-                self.write_message({'type': 'error',
-                                    'error': error})
+                self.write_message({"type": "error", "error": error})
                 return

             self.config = self.config[0]
-            self.send_log('INFO.' + self.simulation_name,
-                          'Using config: {name}'.format(name=self.config['name']))
+            self.send_log(
+                "INFO." + self.simulation_name,
+                "Using config: {name}".format(name=self.config["name"]),
+            )

-            if 'visualization_params' in self.config:
-                self.write_message({'type': 'visualization_params',
-                                    'data': self.config['visualization_params']})
-            self.name = self.config['name']
+            if "visualization_params" in self.config:
+                self.write_message(
+                    {
+                        "type": "visualization_params",
+                        "data": self.config["visualization_params"],
+                    }
+                )
+            self.name = self.config["name"]
             self.run_simulation()

             settings = []
-            for key in self.config['environment_params']:
-                if type(self.config['environment_params'][key]) == float or type(self.config['environment_params'][key]) == int:
-                    if self.config['environment_params'][key] <= 1:
-                        setting_type = 'number'
+            for key in self.config["environment_params"]:
+                if (
+                    type(self.config["environment_params"][key]) == float
+                    or type(self.config["environment_params"][key]) == int
+                ):
+                    if self.config["environment_params"][key] <= 1:
+                        setting_type = "number"
                     else:
-                        setting_type = 'great_number'
-                elif type(self.config['environment_params'][key]) == bool:
-                    setting_type = 'boolean'
+                        setting_type = "great_number"
+                elif type(self.config["environment_params"][key]) == bool:
+                    setting_type = "boolean"
                 else:
-                    setting_type = 'undefined'
+                    setting_type = "undefined"

-                settings.append({
-                    'label': key,
-                    'type': setting_type,
-                    'value': self.config['environment_params'][key]
-                })
+                settings.append(
+                    {
+                        "label": key,
+                        "type": setting_type,
+                        "value": self.config["environment_params"][key],
+                    }
+                )

-            self.write_message({'type': 'settings',
-                                'data': settings})
+            self.write_message({"type": "settings", "data": settings})

-        elif msg['type'] == 'get_trial':
+        elif msg["type"] == "get_trial":
             if self.application.verbose:
-                logger.info('Trial {} requested!'.format(msg['data']))
-            self.send_log('INFO.' + __name__, 'Trial {} requested!'.format(msg['data']))
-            self.write_message({'type': 'get_trial',
-                                'data': self.get_trial(int(msg['data']))})
+                logger.info("Trial {} requested!".format(msg["data"]))
+            self.send_log("INFO." + __name__, "Trial {} requested!".format(msg["data"]))
+            self.write_message(
+                {"type": "get_trial", "data": self.get_trial(int(msg["data"]))}
+            )

-        elif msg['type'] == 'run_simulation':
+        elif msg["type"] == "run_simulation":
             if self.application.verbose:
-                logger.info('Running new simulation for {name}'.format(name=self.config['name']))
-            self.send_log('INFO.' + self.simulation_name, 'Running new simulation for {name}'.format(name=self.config['name']))
-            self.config['environment_params'] = msg['data']
+                logger.info(
+                    "Running new simulation for {name}".format(name=self.config["name"])
+                )
+            self.send_log(
+                "INFO." + self.simulation_name,
+                "Running new simulation for {name}".format(name=self.config["name"]),
+            )
+            self.config["environment_params"] = msg["data"]
             self.run_simulation()

-        elif msg['type'] == 'download_gexf':
-            G = self.trials[ int(msg['data']) ].history_to_graph()
+        elif msg["type"] == "download_gexf":
+            G = self.trials[int(msg["data"])].history_to_graph()
             for node in G.nodes():
-                if 'pos' in G.nodes[node]:
-                    G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
-                    del (G.nodes[node]['pos'])
-            writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft')
+                if "pos" in G.nodes[node]:
+                    G.nodes[node]["viz"] = {
+                        "position": {
+                            "x": G.nodes[node]["pos"][0],
+                            "y": G.nodes[node]["pos"][1],
+                            "z": 0.0,
+                        }
+                    }
+                    del G.nodes[node]["pos"]
+            writer = nx.readwrite.gexf.GEXFWriter(version="1.2draft")
             writer.add_graph(G)
-            self.write_message({'type': 'download_gexf',
-                                'filename': self.config['name'] + '_trial_' + str(msg['data']),
-                                'data': tostring(writer.xml).decode(writer.encoding) })
+            self.write_message(
+                {
+                    "type": "download_gexf",
+                    "filename": self.config["name"] + "_trial_" + str(msg["data"]),
+                    "data": tostring(writer.xml).decode(writer.encoding),
+                }
+            )

-        elif msg['type'] == 'download_json':
-            G = self.trials[ int(msg['data']) ].history_to_graph()
+        elif msg["type"] == "download_json":
+            G = self.trials[int(msg["data"])].history_to_graph()
             for node in G.nodes():
-                if 'pos' in G.nodes[node]:
-                    G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
-                    del (G.nodes[node]['pos'])
-            self.write_message({'type': 'download_json',
-                                'filename': self.config['name'] + '_trial_' + str(msg['data']),
-                                'data': nx.node_link_data(G) })
+                if "pos" in G.nodes[node]:
+                    G.nodes[node]["viz"] = {
+                        "position": {
+                            "x": G.nodes[node]["pos"][0],
+                            "y": G.nodes[node]["pos"][1],
+                            "z": 0.0,
+                        }
+                    }
+                    del G.nodes[node]["pos"]
+            self.write_message(
+                {
+                    "type": "download_json",
+                    "filename": self.config["name"] + "_trial_" + str(msg["data"]),
+                    "data": nx.node_link_data(G),
+                }
+            )

         else:
             if self.application.verbose:
-                logger.info('Unexpected message!')
+                logger.info("Unexpected message!")

     def update_logging(self):
         try:
-            if (not self.log_capture_string.closed and self.log_capture_string.getvalue()):
-                for i in range(len(self.log_capture_string.getvalue().split('\n')) - 1):
-                    self.send_log('INFO.' + self.simulation_name, self.log_capture_string.getvalue().split('\n')[i])
+            if (
+                not self.log_capture_string.closed
+                and self.log_capture_string.getvalue()
+            ):
+                for i in range(len(self.log_capture_string.getvalue().split("\n")) - 1):
+                    self.send_log(
+                        "INFO." + self.simulation_name,
+                        self.log_capture_string.getvalue().split("\n")[i],
+                    )
                 self.log_capture_string.truncate(0)
                 self.log_capture_string.seek(0)
         finally:
             if self.capture_logging:
-                tornado.ioloop.IOLoop.current().call_later(LOGGING_INTERVAL, self.update_logging)
+                tornado.ioloop.IOLoop.current().call_later(
+                    LOGGING_INTERVAL, self.update_logging
+                )

     def on_close(self):
         if self.application.verbose:
-            logger.info('Socket closed!')
+            logger.info("Socket closed!")

     def send_log(self, logger, logging):
-        self.write_message({'type': 'log',
-                            'logger': logger,
-                            'logging': logging})
+        self.write_message({"type": "log", "logger": logger, "logging": logging})

     @property
     def simulation_name(self):
-        return self.config.get('name', 'NoSimulationRunning')
+        return self.config.get("name", "NoSimulationRunning")

     @run_on_executor
     def nonblocking(self, config):
@@ -174,28 +218,31 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
     @tornado.gen.coroutine
     def run_simulation(self):
         # Run simulation and capture logs
-        logger.info('Running simulation!')
-        if 'visualization_params' in self.config:
-            del self.config['visualization_params']
+        logger.info("Running simulation!")
+        if "visualization_params" in self.config:
+            del self.config["visualization_params"]
         with self.logging(self.simulation_name):
             try:
                 config = dict(**self.config)
-                config['outdir'] = os.path.join(self.application.outdir, config['name'])
-                config['dump'] = self.application.dump
+                config["outdir"] = os.path.join(self.application.outdir, config["name"])
+                config["dump"] = self.application.dump
                 self.trials = yield self.nonblocking(config)

-                self.write_message({'type': 'trials',
-                                    'data': list(trial.name for trial in self.trials) })
+                self.write_message(
+                    {
+                        "type": "trials",
+                        "data": list(trial.name for trial in self.trials),
+                    }
+                )
             except Exception as ex:
-                error = 'Something went wrong:\n\t{}'.format(ex)
+                error = "Something went wrong:\n\t{}".format(ex)
                 logging.info(error)
-                self.write_message({'type': 'error',
-                                    'error': error})
-                self.send_log('ERROR.' + self.simulation_name, error)
+                self.write_message({"type": "error", "error": error})
+                self.send_log("ERROR." + self.simulation_name, error)

     def get_trial(self, trial):
-        logger.info('Available trials: %s ' % len(self.trials))
-        logger.info('Ask for : %s' % trial)
+        logger.info("Available trials: %s " % len(self.trials))
+        logger.info("Ask for : %s" % trial)
         trial = self.trials[trial]
         G = trial.history_to_graph()
         return nx.node_link_data(G)
@@ -221,18 +268,21 @@ class ModularServer(tornado.web.Application):
     """Main visualization application."""

     port = 8001
-    page_handler = (r'/', PageHandler)
-    socket_handler = (r'/ws', SocketHandler)
-    static_handler = (r'/(.*)', tornado.web.StaticFileHandler,
-                      {'path': os.path.join(ROOT, 'static')})
-    local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,
-                     {'path': ''})
+    page_handler = (r"/", PageHandler)
+    socket_handler = (r"/ws", SocketHandler)
+    static_handler = (
+        r"/(.*)",
+        tornado.web.StaticFileHandler,
+        {"path": os.path.join(ROOT, "static")},
+    )
+    local_handler = (r"/local/(.*)", tornado.web.StaticFileHandler, {"path": ""})

     handlers = [page_handler, socket_handler, static_handler, local_handler]
-    settings = {'debug': True,
-                'template_path': ROOT + '/templates'}
+    settings = {"debug": True, "template_path": ROOT + "/templates"}

-    def __init__(self, dump=False, outdir='output', name='SOIL', verbose=True, *args, **kwargs):
+    def __init__(
+        self, dump=False, outdir="output", name="SOIL", verbose=True, *args, **kwargs
+    ):

         self.verbose = verbose
         self.name = name
@@ -247,8 +297,8 @@ class ModularServer(tornado.web.Application):

         if port is not None:
             self.port = port
-        url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
-        print('Interface starting at {url}'.format(url=url))
+        url = "http://127.0.0.1:{PORT}".format(PORT=self.port)
+        print("Interface starting at {url}".format(url=url))
         self.listen(self.port)
         # webbrowser.open(url)
         tornado.ioloop.IOLoop.instance().start()
@@ -263,12 +313,22 @@ def run(*args, **kwargs):
 def main():
     import argparse

-    parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
+    parser = argparse.ArgumentParser(description="Visualization of a Graph Model")

-    parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
-    parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
-    parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
-    parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
+    parser.add_argument(
+        "--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
+    )
+    parser.add_argument(
+        "--dump", "-d", help="dumping results in folder output", action="store_true"
+    )
+    parser.add_argument(
+        "--port", "-p", nargs=1, default=8001, help="port for launching the server"
+    )
+    parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
     args = parser.parse_args()

-    run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
+    run(
+        name=args.name,
+        port=(args.port[0] if isinstance(args.port, list) else args.port),
+        verbose=args.verbose,
+    )
@@ -4,20 +4,33 @@ from simulator import Simulator


 def run(simulator, name="SOIL", port=8001, verbose=False):
-    server = ModularServer(simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose)
+    server = ModularServer(
+        simulator, name=(name[0] if isinstance(name, list) else name), verbose=verbose
+    )
     server.port = port
     server.launch()


 if __name__ == "__main__":

-    parser = argparse.ArgumentParser(description='Visualization of a Graph Model')
+    parser = argparse.ArgumentParser(description="Visualization of a Graph Model")

-    parser.add_argument('--name', '-n', nargs=1, default='SOIL', help='name of the simulation')
-    parser.add_argument('--dump', '-d', help='dumping results in folder output', action='store_true')
-    parser.add_argument('--port', '-p', nargs=1, default=8001, help='port for launching the server')
-    parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
+    parser.add_argument(
+        "--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
+    )
+    parser.add_argument(
+        "--dump", "-d", help="dumping results in folder output", action="store_true"
+    )
+    parser.add_argument(
+        "--port", "-p", nargs=1, default=8001, help="port for launching the server"
+    )
+    parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
     args = parser.parse_args()

     soil = Simulator(dump=args.dump)
-    run(soil, name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
+    run(
+        soil,
+        name=args.name,
+        port=(args.port[0] if isinstance(args.port, list) else args.port),
+        verbose=args.verbose,
+    )
@@ -9,8 +9,7 @@ interval: 1
 seed: "CompleteSeed!"
 model_class: Environment
 model_params:
-  topologies:
-    default:
+  topology:
     params:
       generator: complete_graph
       n: 4
@@ -19,7 +18,7 @@ model_params:
     state:
       group: network
       times: 1
-    topology: 'default'
+    topology: true
     distribution:
     - agent_class: CounterModel
       weight: 0.25
@@ -42,7 +41,7 @@ model_params:
   fixed:
   - agent_class: BaseAgent
     hidden: true
-    topology: null
+    topology: false
    state:
      name: 'Environment Agent 1'
      times: 10
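The configuration diff above replaces the `topologies:` mapping (with a named `default` entry) by a single `topology:` section, and per-agent `topology:` values become booleans that opt a group in or out of that network. The updated tests later in this changeset use the same shape in dict form; a hedged sketch of such a configuration:

```python
from soil import simulation

config = {
    "name": "CounterAgent",
    "num_trials": 1,
    "max_time": 2,
    "model_params": {
        # One unnamed topology instead of a "topologies" mapping.
        "topology": {"params": {"generator": "complete_graph", "n": 4}},
        "agents": {
            "agent_class": "CounterModel",
            # True places the agents on the model topology; False keeps them off it.
            "topology": True,
            "fixed": [{"state": {"times": 10}}, {"state": {"times": 20}}],
        },
    },
}

s = simulation.from_config(config)
```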
@@ -4,21 +4,66 @@ import pytest
 from soil import agents, environment
 from soil import time as stime


 class Dead(agents.FSM):
     @agents.default_state
     @agents.state
     def only(self):
         return self.die()

-class TestMain(TestCase):
-    def test_die_raises_exception(self):
-        d = Dead(unique_id=0, model=environment.Environment())
-        d.step()
-        with pytest.raises(agents.DeadAgent):
-            d.step()

+class TestMain(TestCase):
     def test_die_returns_infinity(self):
+        '''The last step of a dead agent should return time.INFINITY'''
         d = Dead(unique_id=0, model=environment.Environment())
         ret = d.step().abs(0)
-        print(ret, 'next')
-        assert ret == stime.INFINITY
+        print(ret, "next")
+        assert ret == stime.NEVER

+    def test_die_raises_exception(self):
+        '''A dead agent should raise an exception if it is stepped after death'''
+        d = Dead(unique_id=0, model=environment.Environment())
+        d.step()
+        with pytest.raises(stime.DeadAgent):
+            d.step()
+
+
+    def test_agent_generator(self):
+        '''
+        The step function of an agent could be a generator. In that case, the state of the
+        agent will be resumed after every call to step.
+        '''
+        a = 0
+        class Gen(agents.BaseAgent):
+            def step(self):
+                nonlocal a
+                for i in range(5):
+                    yield
+                    a += 1
+        e = environment.Environment()
+        g = Gen(model=e, unique_id=e.next_id())
+        e.schedule.add(g)
+
+        for i in range(5):
+            e.step()
+            assert a == i

+    def test_state_decorator(self):
+        class MyAgent(agents.FSM):
+            run = 0
+            @agents.default_state
+            @agents.state('original')
+            def root(self):
+                self.run += 1
+                return self.other
+
+            @agents.state
+            def other(self):
+                self.run += 1
+
+        e = environment.Environment()
+        a = MyAgent(model=e, unique_id=e.next_id())
+        a.step()
+        assert a.run == 1
+        a.step()
+        assert a.run == 2
|
|||||||
from soil import simulation, serialization, config, network, agents, utils
|
from soil import simulation, serialization, config, network, agents, utils
|
||||||
|
|
||||||
ROOT = os.path.abspath(os.path.dirname(__file__))
|
ROOT = os.path.abspath(os.path.dirname(__file__))
|
||||||
EXAMPLES = join(ROOT, '..', 'examples')
|
EXAMPLES = join(ROOT, "..", "examples")
|
||||||
|
|
||||||
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
|
FORCE_TESTS = os.environ.get("FORCE_TESTS", "")
|
||||||
|
|
||||||
|
|
||||||
def isequal(a, b):
|
def isequal(a, b):
|
||||||
@@ -24,7 +24,6 @@ def isequal(a, b):
|
|||||||
|
|
||||||
|
|
||||||
class TestConfig(TestCase):
|
class TestConfig(TestCase):
|
||||||
|
|
||||||
def test_conversion(self):
|
def test_conversion(self):
|
||||||
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
||||||
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
|
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
|
||||||
@@ -38,7 +37,7 @@ class TestConfig(TestCase):
|
|||||||
The configuration should not change after running
|
The configuration should not change after running
|
||||||
the simulation.
|
the simulation.
|
||||||
"""
|
"""
|
||||||
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
|
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
init_config = copy.copy(s.to_dict())
|
init_config = copy.copy(s.to_dict())
|
||||||
|
|
||||||
@@ -47,11 +46,8 @@ class TestConfig(TestCase):
|
|||||||
# del nconfig['to
|
# del nconfig['to
|
||||||
isequal(init_config, nconfig)
|
isequal(init_config, nconfig)
|
||||||
|
|
||||||
|
|
||||||
def test_topology_config(self):
|
def test_topology_config(self):
|
||||||
netconfig = config.NetConfig(**{
|
netconfig = config.NetConfig(**{"path": join(ROOT, "test.gexf")})
|
||||||
'path': join(ROOT, 'test.gexf')
|
|
||||||
})
|
|
||||||
net = network.from_config(netconfig, dir_path=ROOT)
|
net = network.from_config(netconfig, dir_path=ROOT)
|
||||||
assert len(net.nodes) == 2
|
assert len(net.nodes) == 2
|
||||||
assert len(net.edges) == 1
|
assert len(net.edges) == 1
|
||||||
@@ -62,36 +58,33 @@ class TestConfig(TestCase):
|
|||||||
network agents are initialized properly.
|
network agents are initialized properly.
|
||||||
"""
|
"""
|
||||||
cfg = {
|
cfg = {
|
||||||
'name': 'CounterAgent',
|
"name": "CounterAgent",
|
||||||
'network_params': {
|
"network_params": {"path": join(ROOT, "test.gexf")},
|
||||||
'path': join(ROOT, 'test.gexf')
|
"agent_class": "CounterModel",
|
||||||
},
|
|
||||||
'agent_class': 'CounterModel',
|
|
||||||
# 'states': [{'times': 10}, {'times': 20}],
|
# 'states': [{'times': 10}, {'times': 20}],
|
||||||
'max_time': 2,
|
"max_time": 2,
|
||||||
'dry_run': True,
|
"dry_run": True,
|
||||||
'num_trials': 1,
|
"num_trials": 1,
|
||||||
'environment_params': {
|
"environment_params": {},
|
||||||
}
|
|
||||||
}
|
}
|
||||||
conf = config.convert_old(cfg)
|
conf = config.convert_old(cfg)
|
||||||
s = simulation.from_config(conf)
|
s = simulation.from_config(conf)
|
||||||
|
|
||||||
env = s.get_env()
|
env = s.get_env()
|
||||||
assert len(env.topologies['default'].nodes) == 2
|
assert len(env.G.nodes) == 2
|
||||||
assert len(env.topologies['default'].edges) == 1
|
assert len(env.G.edges) == 1
|
||||||
assert len(env.agents) == 2
|
assert len(env.agents) == 2
|
||||||
assert env.agents[0].G == env.topologies['default']
|
assert env.agents[0].G == env.G
|
||||||
|
|
||||||
def test_agents_from_config(self):
|
def test_agents_from_config(self):
|
||||||
'''We test that the known complete configuration produces
|
"""We test that the known complete configuration produces
|
||||||
the right agents in the right groups'''
|
the right agents in the right groups"""
|
||||||
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
|
||||||
s = simulation.from_config(cfg)
|
s = simulation.from_config(cfg)
|
||||||
env = s.get_env()
|
env = s.get_env()
|
||||||
assert len(env.topologies['default'].nodes) == 4
|
assert len(env.G.nodes) == 4
|
||||||
assert len(env.agents(group='network')) == 4
|
assert len(env.agents(group="network")) == 4
|
||||||
assert len(env.agents(group='environment')) == 1
|
assert len(env.agents(group="environment")) == 1
|
||||||
|
|
||||||
def test_yaml(self):
|
def test_yaml(self):
|
||||||
"""
|
"""
|
||||||
@@ -100,16 +93,17 @@ class TestConfig(TestCase):
|
|||||||
Values not present in the original config file should have reasonable
|
Values not present in the original config file should have reasonable
|
||||||
defaults.
|
defaults.
|
||||||
"""
|
"""
|
||||||
with utils.timer('loading'):
|
with utils.timer("loading"):
|
||||||
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
|
config = serialization.load_file(join(EXAMPLES, "complete.yml"))[0]
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
with utils.timer('serializing'):
|
with utils.timer("serializing"):
|
||||||
serial = s.to_yaml()
|
serial = s.to_yaml()
|
||||||
with utils.timer('recovering'):
|
with utils.timer("recovering"):
|
||||||
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
|
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
|
||||||
for (k, v) in config.items():
|
for (k, v) in config.items():
|
||||||
assert recovered[k] == v
|
assert recovered[k] == v
|
||||||
|
|
||||||
|
|
||||||
def make_example_test(path, cfg):
|
def make_example_test(path, cfg):
|
||||||
def wrapped(self):
|
def wrapped(self):
|
||||||
root = os.getcwd()
|
root = os.getcwd()
|
||||||
@@ -133,18 +127,19 @@ def make_example_test(path, cfg):
|
|||||||
# assert env.now <= config['max_time'] # But not further than allowed
|
# assert env.now <= config['max_time'] # But not further than allowed
|
||||||
# except KeyError:
|
# except KeyError:
|
||||||
# pass
|
# pass
|
||||||
|
|
||||||
return wrapped
|
return wrapped
|
||||||
|
|
||||||
|
|
||||||
def add_example_tests():
|
def add_example_tests():
|
||||||
for config, path in serialization.load_files(
|
for config, path in serialization.load_files(
|
||||||
join(EXAMPLES, '*', '*.yml'),
|
join(EXAMPLES, "*", "*.yml"),
|
||||||
join(EXAMPLES, '*.yml'),
|
join(EXAMPLES, "*.yml"),
|
||||||
):
|
):
|
||||||
p = make_example_test(path=path, cfg=config)
|
p = make_example_test(path=path, cfg=config)
|
||||||
fname = os.path.basename(path)
|
fname = os.path.basename(path)
|
||||||
p.__name__ = 'test_example_file_%s' % fname
|
p.__name__ = "test_example_file_%s" % fname
|
||||||
p.__doc__ = '%s should be a valid configuration' % fname
|
p.__doc__ = "%s should be a valid configuration" % fname
|
||||||
setattr(TestConfig, p.__name__, p)
|
setattr(TestConfig, p.__name__, p)
|
||||||
del p
|
del p
|
||||||
|
|
||||||
|
|||||||
@@ -5,9 +5,9 @@ from os.path import join
 from soil import serialization, simulation, config

 ROOT = os.path.abspath(os.path.dirname(__file__))
-EXAMPLES = join(ROOT, '..', 'examples')
+EXAMPLES = join(ROOT, "..", "examples")

-FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
+FORCE_TESTS = os.environ.get("FORCE_TESTS", "")


 class TestExamples(TestCase):
@@ -23,31 +23,31 @@ def make_example_test(path, cfg):
         s.max_steps = 100
         s.num_trials = 1
         assert isinstance(cfg, config.Config)
-        if getattr(cfg, 'skip_test', False) and not FORCE_TESTS:
-            self.skipTest('Example ignored.')
+        if getattr(cfg, "skip_test", False) and not FORCE_TESTS:
+            self.skipTest("Example ignored.")
         envs = s.run_simulation(dry_run=True)
         assert envs
         for env in envs:
             assert env
             try:
-                n = cfg.model_params['network_params']['n']
+                n = cfg.model_params["network_params"]["n"]
                 assert len(list(env.network_agents)) == n
             except KeyError:
                 pass
             assert env.schedule.steps > 0  # It has run
             assert env.schedule.steps <= s.max_steps  # But not further than allowed

     return wrapped


 def add_example_tests():
     for cfg, path in serialization.load_files(
-        join(EXAMPLES, '*', '*.yml'),
-        join(EXAMPLES, '*.yml'),
+        join(EXAMPLES, "**", "*.yml"),
     ):
         p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
         fname = os.path.basename(path)
-        p.__name__ = 'test_example_file_%s' % fname
-        p.__doc__ = '%s should be a valid configuration' % fname
+        p.__name__ = "test_example_file_%s" % fname
+        p.__doc__ = "%s should be a valid configuration" % fname
         setattr(TestExamples, p.__name__, p)
         del p
|
|||||||
import io
|
import io
|
||||||
import tempfile
|
import tempfile
|
||||||
import shutil
|
import shutil
|
||||||
|
import sqlite3
|
||||||
|
|
||||||
from unittest import TestCase
|
from unittest import TestCase
|
||||||
from soil import exporters
|
from soil import exporters
|
||||||
@@ -40,20 +41,15 @@ class Exporters(TestCase):
|
|||||||
num_trials = 5
|
num_trials = 5
|
||||||
max_time = 2
|
max_time = 2
|
||||||
config = {
|
config = {
|
||||||
'name': 'exporter_sim',
|
"name": "exporter_sim",
|
||||||
'model_params': {
|
"model_params": {"agents": [{"agent_class": agents.BaseAgent}]},
|
||||||
'agents': [{
|
"max_time": max_time,
|
||||||
'agent_class': agents.BaseAgent
|
"num_trials": num_trials,
|
||||||
}]
|
|
||||||
},
|
|
||||||
'max_time': max_time,
|
|
||||||
'num_trials': num_trials,
|
|
||||||
}
|
}
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
|
|
||||||
for env in s.run_simulation(exporters=[Dummy], dry_run=True):
|
for env in s.run_simulation(exporters=[Dummy], dry_run=True):
|
||||||
assert len(env.agents) == 1
|
assert len(env.agents) == 1
|
||||||
assert env.now == max_time
|
|
||||||
|
|
||||||
assert Dummy.started
|
assert Dummy.started
|
||||||
assert Dummy.ended
|
assert Dummy.ended
|
||||||
@@ -64,40 +60,52 @@ class Exporters(TestCase):
|
|||||||
assert Dummy.total_time == max_time * num_trials
|
assert Dummy.total_time == max_time * num_trials
|
||||||
|
|
||||||
def test_writing(self):
|
def test_writing(self):
|
||||||
'''Try to write CSV, sqlite and YAML (without dry_run)'''
|
"""Try to write CSV, sqlite and YAML (without dry_run)"""
|
||||||
n_trials = 5
|
n_trials = 5
|
||||||
config = {
|
config = {
|
||||||
'name': 'exporter_sim',
|
"name": "exporter_sim",
|
||||||
'network_params': {
|
"network_params": {"generator": "complete_graph", "n": 4},
|
||||||
'generator': 'complete_graph',
|
"agent_class": "CounterModel",
|
||||||
'n': 4
|
"max_time": 2,
|
||||||
},
|
"num_trials": n_trials,
|
||||||
'agent_class': 'CounterModel',
|
"dry_run": False,
|
||||||
'max_time': 2,
|
"environment_params": {},
|
||||||
'num_trials': n_trials,
|
|
||||||
'dry_run': False,
|
|
||||||
'environment_params': {}
|
|
||||||
}
|
}
|
||||||
output = io.StringIO()
|
output = io.StringIO()
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
tmpdir = tempfile.mkdtemp()
|
tmpdir = tempfile.mkdtemp()
|
||||||
envs = s.run_simulation(exporters=[
|
envs = s.run_simulation(
|
||||||
|
exporters=[
|
||||||
exporters.default,
|
exporters.default,
|
||||||
exporters.csv,
|
exporters.csv,
|
||||||
],
|
],
|
||||||
|
model_params={
|
||||||
|
"agent_reporters": {"times": "times"},
|
||||||
|
"model_reporters": {
|
||||||
|
"constant": lambda x: 1,
|
||||||
|
},
|
||||||
|
},
|
||||||
dry_run=False,
|
dry_run=False,
|
||||||
outdir=tmpdir,
|
outdir=tmpdir,
|
||||||
exporter_params={'copy_to': output})
|
exporter_params={"copy_to": output},
|
||||||
|
)
|
||||||
result = output.getvalue()
|
result = output.getvalue()
|
||||||
|
|
||||||
simdir = os.path.join(tmpdir, s.group or '', s.name)
|
simdir = os.path.join(tmpdir, s.group or "", s.name)
|
||||||
with open(os.path.join(simdir, '{}.dumped.yml'.format(s.name))) as f:
|
with open(os.path.join(simdir, "{}.dumped.yml".format(s.name))) as f:
|
||||||
result = f.read()
|
result = f.read()
|
||||||
assert result
|
assert result
|
||||||
|
|
||||||
try:
|
try:
|
||||||
for e in envs:
|
for e in envs:
|
||||||
with open(os.path.join(simdir, '{}.env.csv'.format(e.id))) as f:
|
db = sqlite3.connect(os.path.join(simdir, f"{s.name}.sqlite"))
|
||||||
|
cur = db.cursor()
|
||||||
|
agent_entries = cur.execute("SELECT * from agents").fetchall()
|
||||||
|
env_entries = cur.execute("SELECT * from env").fetchall()
|
||||||
|
assert len(agent_entries) > 0
|
||||||
|
assert len(env_entries) > 0
|
||||||
|
|
||||||
|
with open(os.path.join(simdir, "{}.env.csv".format(e.id))) as f:
|
||||||
result = f.read()
|
result = f.read()
|
||||||
assert result
|
assert result
|
||||||
finally:
|
finally:
|
||||||
|
|||||||
@@ -6,60 +6,55 @@ import networkx as nx
|
|||||||
from functools import partial
|
from functools import partial
|
||||||
|
|
||||||
from os.path import join
|
from os.path import join
|
||||||
from soil import (simulation, Environment, agents, network, serialization,
|
from soil import simulation, Environment, agents, network, serialization, utils, config
|
||||||
utils, config)
|
|
||||||
from soil.time import Delta
|
from soil.time import Delta
|
||||||
|
|
||||||
ROOT = os.path.abspath(os.path.dirname(__file__))
|
ROOT = os.path.abspath(os.path.dirname(__file__))
|
||||||
EXAMPLES = join(ROOT, '..', 'examples')
|
EXAMPLES = join(ROOT, "..", "examples")
|
||||||
|
|
||||||
|
|
||||||
class CustomAgent(agents.FSM, agents.NetworkAgent):
|
class CustomAgent(agents.FSM, agents.NetworkAgent):
|
||||||
@agents.default_state
|
@agents.default_state
|
||||||
@agents.state
|
@agents.state
|
||||||
def normal(self):
|
def normal(self):
|
||||||
self.neighbors = self.count_agents(state_id='normal',
|
self.neighbors = self.count_agents(state_id="normal", limit_neighbors=True)
|
||||||
limit_neighbors=True)
|
|
||||||
@agents.state
|
@agents.state
|
||||||
def unreachable(self):
|
def unreachable(self):
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
||||||
class TestMain(TestCase):
|
class TestMain(TestCase):
|
||||||
|
|
||||||
def test_empty_simulation(self):
|
def test_empty_simulation(self):
|
||||||
"""A simulation with a base behaviour should do nothing"""
|
"""A simulation with a base behaviour should do nothing"""
|
||||||
config = {
|
config = {
|
||||||
'model_params': {
|
"model_params": {
|
||||||
'network_params': {
|
"network_params": {"path": join(ROOT, "test.gexf")},
|
||||||
'path': join(ROOT, 'test.gexf')
|
"agent_class": "BaseAgent",
|
||||||
},
|
|
||||||
'agent_class': 'BaseAgent',
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
s.run_simulation(dry_run=True)
|
s.run_simulation(dry_run=True)
|
||||||
|
|
||||||
|
|
||||||
def test_network_agent(self):
|
def test_network_agent(self):
|
||||||
"""
|
"""
|
||||||
The initial states should be applied to the agent and the
|
The initial states should be applied to the agent and the
|
||||||
agent should be able to update its state."""
|
agent should be able to update its state."""
|
||||||
config = {
|
config = {
|
||||||
'name': 'CounterAgent',
|
"name": "CounterAgent",
|
||||||
'num_trials': 1,
|
"num_trials": 1,
|
||||||
'max_time': 2,
|
"max_time": 2,
|
||||||
'model_params': {
|
"model_params": {
|
||||||
'network_params': {
|
"network_params": {
|
||||||
'generator': nx.complete_graph,
|
"generator": nx.complete_graph,
|
||||||
'n': 2,
|
"n": 2,
|
||||||
|
},
|
||||||
|
"agent_class": "CounterModel",
|
||||||
|
"states": {
|
||||||
|
0: {"times": 10},
|
||||||
|
1: {"times": 20},
|
||||||
},
|
},
|
||||||
'agent_class': 'CounterModel',
|
|
||||||
'states': {
|
|
||||||
0: {'times': 10},
|
|
||||||
1: {'times': 20},
|
|
||||||
},
|
},
|
||||||
}
|
|
||||||
}
|
}
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
|
|
||||||
@@ -68,48 +63,41 @@ class TestMain(TestCase):
|
|||||||
The initial states should be applied to the agent and the
|
The initial states should be applied to the agent and the
|
||||||
agent should be able to update its state."""
|
agent should be able to update its state."""
|
||||||
config = {
|
config = {
|
||||||
'version': '2',
|
"version": "2",
|
||||||
'name': 'CounterAgent',
|
"name": "CounterAgent",
|
||||||
'dry_run': True,
|
"dry_run": True,
|
||||||
'num_trials': 1,
|
"num_trials": 1,
|
||||||
'max_time': 2,
|
"max_time": 2,
|
||||||
'model_params': {
|
"model_params": {
|
||||||
'topologies': {
|
"topology": {"path": join(ROOT, "test.gexf")},
|
||||||
'default': {
|
"agents": {
|
||||||
'path': join(ROOT, 'test.gexf')
|
"agent_class": "CounterModel",
|
||||||
}
|
"topology": True,
|
||||||
|
"fixed": [{"state": {"times": 10}}, {"state": {"times": 20}}],
|
||||||
|
},
|
||||||
},
|
},
|
||||||
'agents': {
|
|
||||||
'agent_class': 'CounterModel',
|
|
||||||
'topology': 'default',
|
|
||||||
'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
env = s.get_env()
|
env = s.get_env()
|
||||||
assert isinstance(env.agents[0], agents.CounterModel)
|
assert isinstance(env.agents[0], agents.CounterModel)
|
||||||
assert env.agents[0].G == env.topologies['default']
|
assert env.agents[0].G == env.G
|
||||||
assert env.agents[0]['times'] == 10
|
assert env.agents[0]["times"] == 10
|
||||||
assert env.agents[0]['times'] == 10
|
assert env.agents[0]["times"] == 10
|
||||||
env.step()
|
env.step()
|
||||||
assert env.agents[0]['times'] == 11
|
assert env.agents[0]["times"] == 11
|
||||||
assert env.agents[1]['times'] == 21
|
assert env.agents[1]["times"] == 21
|
||||||
|
|
||||||
def test_init_and_count_agents(self):
|
def test_init_and_count_agents(self):
|
||||||
"""Agents should be properly initialized and counting should filter them properly"""
|
"""Agents should be properly initialized and counting should filter them properly"""
|
||||||
# TODO: separate this test into two or more test cases
|
# TODO: separate this test into two or more test cases
|
||||||
config = {
|
config = {
|
||||||
'max_time': 10,
|
"max_time": 10,
|
||||||
'model_params': {
|
"model_params": {
|
||||||
'agents': [{'agent_class': CustomAgent, 'weight': 1, 'topology': 'default'},
|
"agents": [
|
||||||
{'agent_class': CustomAgent, 'weight': 3, 'topology': 'default'},
|
{"agent_class": CustomAgent, "weight": 1, "topology": True},
|
||||||
|
{"agent_class": CustomAgent, "weight": 3, "topology": True},
|
||||||
],
|
],
|
||||||
'topologies': {
|
"topology": {"path": join(ROOT, "test.gexf")},
|
||||||
'default': {
|
|
||||||
'path': join(ROOT, 'test.gexf')
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
@@ -120,40 +108,45 @@ class TestMain(TestCase):
|
|||||||
assert env.count_agents(weight=3) == 1
|
assert env.count_agents(weight=3) == 1
|
||||||
assert env.count_agents(agent_class=CustomAgent) == 2
|
assert env.count_agents(agent_class=CustomAgent) == 2
|
||||||
|
|
||||||
|
|
||||||
def test_torvalds_example(self):
|
def test_torvalds_example(self):
|
||||||
"""A complete example from a documentation should work."""
|
"""A complete example from a documentation should work."""
|
||||||
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
|
config = serialization.load_file(join(EXAMPLES, "torvalds.yml"))[0]
|
||||||
config['model_params']['network_params']['path'] = join(EXAMPLES,
|
config["model_params"]["network_params"]["path"] = join(
|
||||||
config['model_params']['network_params']['path'])
|
EXAMPLES, config["model_params"]["network_params"]["path"]
|
||||||
|
)
|
||||||
s = simulation.from_config(config)
|
s = simulation.from_config(config)
|
||||||
env = s.run_simulation(dry_run=True)[0]
|
env = s.run_simulation(dry_run=True)[0]
|
||||||
for a in env.network_agents:
|
for a in env.network_agents:
|
||||||
skill_level = a.state['skill_level']
|
skill_level = a.state["skill_level"]
|
||||||
if a.id == 'Torvalds':
|
if a.id == "Torvalds":
|
||||||
assert skill_level == 'God'
|
assert skill_level == "God"
|
||||||
assert a.state['total'] == 3
|
assert a.state["total"] == 3
|
||||||
assert a.state['neighbors'] == 2
|
assert a.state["neighbors"] == 2
|
||||||
elif a.id == 'balkian':
|
elif a.id == "balkian":
|
||||||
assert skill_level == 'developer'
|
assert skill_level == "developer"
|
||||||
assert a.state['total'] == 3
|
assert a.state["total"] == 3
|
||||||
assert a.state['neighbors'] == 1
|
assert a.state["neighbors"] == 1
|
||||||
else:
|
else:
|
||||||
assert skill_level == 'beginner'
|
assert skill_level == "beginner"
|
||||||
assert a.state['total'] == 3
|
assert a.state["total"] == 3
|
||||||
assert a.state['neighbors'] == 1
|
assert a.state["neighbors"] == 1
|
||||||
|
|
||||||
def test_serialize_class(self):
|
def test_serialize_class(self):
|
||||||
ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
|
ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
|
||||||
assert name == 'soil.agents.BaseAgent'
|
assert name == "soil.agents.BaseAgent"
|
||||||
assert ser == agents.BaseAgent
|
assert ser == agents.BaseAgent
|
||||||
|
|
||||||
ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
|
ser, name = serialization.serialize(
|
||||||
assert name == 'BaseAgent'
|
agents.BaseAgent,
|
||||||
|
known_modules=[
|
||||||
|
"soil",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
assert name == "BaseAgent"
|
||||||
assert ser == agents.BaseAgent
|
assert ser == agents.BaseAgent
|
||||||
|
|
||||||
ser, name = serialization.serialize(CustomAgent)
|
ser, name = serialization.serialize(CustomAgent)
|
||||||
assert name == 'test_main.CustomAgent'
|
assert name == "test_main.CustomAgent"
|
||||||
assert ser == CustomAgent
|
assert ser == CustomAgent
|
||||||
pickle.dumps(ser)
|
pickle.dumps(ser)
|
||||||
|
|
||||||
@@ -166,72 +159,43 @@ class TestMain(TestCase):
         assert i == des

     def test_serialize_agent_class(self):
-        '''A class from soil.agents should be serialized without the module part'''
-        ser = agents.serialize_type(CustomAgent)
-        assert ser == 'test_main.CustomAgent'
-        ser = agents.serialize_type(agents.BaseAgent)
-        assert ser == 'BaseAgent'
+        """A class from soil.agents should be serialized without the module part"""
+        ser = agents._serialize_type(CustomAgent)
+        assert ser == "test_main.CustomAgent"
+        ser = agents._serialize_type(agents.BaseAgent)
+        assert ser == "BaseAgent"
         pickle.dumps(ser)

-    def test_deserialize_agent_distribution(self):
-        agent_distro = [
-            {
-                'agent_class': 'CounterModel',
-                'weight': 1
-            },
-            {
-                'agent_class': 'test_main.CustomAgent',
-                'weight': 2
-            },
-        ]
-        converted = agents.deserialize_definition(agent_distro)
-        assert converted[0]['agent_class'] == agents.CounterModel
-        assert converted[1]['agent_class'] == CustomAgent
-        pickle.dumps(converted)

-    def test_serialize_agent_distribution(self):
-        agent_distro = [
-            {
-                'agent_class': agents.CounterModel,
-                'weight': 1
-            },
-            {
-                'agent_class': CustomAgent,
-                'weight': 2
-            },
-        ]
-        converted = agents.serialize_definition(agent_distro)
-        assert converted[0]['agent_class'] == 'CounterModel'
-        assert converted[1]['agent_class'] == 'test_main.CustomAgent'
-        pickle.dumps(converted)

     def test_templates(self):
-        '''Loading a template should result in several configs'''
-        configs = serialization.load_file(join(EXAMPLES, 'template.yml'))
+        """Loading a template should result in several configs"""
+        configs = serialization.load_file(join(EXAMPLES, "template.yml"))
         assert len(configs) > 0

     def test_until(self):
         config = {
-            'name': 'until_sim',
-            'model_params': {
-                'network_params': {},
-                'agents': {
-                    'fixed': [{
-                        'agent_class': agents.BaseAgent,
-                    }]
+            "name": "until_sim",
+            "model_params": {
+                "network_params": {},
+                "agents": {
+                    "fixed": [
+                        {
+                            "agent_class": agents.BaseAgent,
+                        }
+                    ]
                 },
             },
-            'max_time': 2,
-            'num_trials': 50,
+            "max_time": 2,
+            "num_trials": 50,
         }
         s = simulation.from_config(config)
         runs = list(s.run_simulation(dry_run=True))
         over = list(x.now for x in runs if x.now > 2)
-        assert len(runs) == config['num_trials']
+        assert len(runs) == config["num_trials"]
         assert len(over) == 0

     def test_fsm(self):
-        '''Basic state change'''
+        """Basic state change"""

         class ToggleAgent(agents.FSM):
             @agents.default_state
             @agents.state
@@ -250,7 +214,8 @@ class TestMain(TestCase):
         assert a.state_id == a.ping.id

     def test_fsm_when(self):
-        '''Basic state change'''
+        """Basic state change"""

         class ToggleAgent(agents.FSM):
             @agents.default_state
             @agents.state
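Note: the two FSM hunks above are truncated after the decorators. As a reading aid only, here is a minimal, hypothetical sketch of the kind of `ToggleAgent` these tests exercise; the `pong` state and the convention of returning the next state are assumptions inferred from the `a.state_id == a.ping.id` assertion, not part of the diff.

```python
from soil import agents


class ToggleAgent(agents.FSM):
    @agents.default_state
    @agents.state
    def ping(self):
        # Assumed convention: returning another state switches to it on the next step.
        return self.pong

    @agents.state
    def pong(self):
        return self.ping
```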
@@ -1,4 +1,4 @@
-'''
+"""
 Mesa-SOIL integration tests

 We have to test that:
@@ -8,13 +8,15 @@ We have to test that:

 - Mesa visualizations work with SOIL simulations

-'''
+"""
 from mesa import Agent, Model
 from mesa.time import RandomActivation
 from mesa.space import MultiGrid


 class MoneyAgent(Agent):
     """An agent with fixed initial wealth."""

     def __init__(self, unique_id, model):
         super().__init__(unique_id, model)
         self.wealth = 1
@@ -33,15 +35,15 @@ class MoneyAgent(Agent):

     def move(self):
         possible_steps = self.model.grid.get_neighborhood(
-            self.pos,
-            moore=True,
-            include_center=False)
+            self.pos, moore=True, include_center=False
+        )
         new_position = self.random.choice(possible_steps)
         self.model.grid.move_agent(self, new_position)


 class MoneyModel(Model):
     """A model with some number of agents."""

     def __init__(self, N, width, height):
         self.num_agents = N
         self.grid = MultiGrid(width, height, True)
@@ -58,7 +60,7 @@ class MoneyModel(Model):
         self.grid.place_agent(a, (x, y))

     def step(self):
-        '''Advance the model by one step.'''
+        """Advance the model by one step."""
         self.schedule.step()

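Note: the classes above are the standard Mesa tutorial model, only reformatted. As a quick orientation, a hypothetical usage sketch (not part of the diff; it assumes the elided portions of the file set up the `RandomActivation` schedule that `self.schedule.step()` implies):

```python
# Hypothetical usage of the tutorial model shown in the diff above.
model = MoneyModel(N=10, width=10, height=10)
for _ in range(5):
    model.step()  # advances the schedule one tick; each agent moves on the grid
```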
@@ -10,7 +10,7 @@ from soil import config, network, environment, agents, simulation
 from test_main import CustomAgent

 ROOT = os.path.abspath(os.path.dirname(__file__))
-EXAMPLES = join(ROOT, '..', 'examples')
+EXAMPLES = join(ROOT, "..", "examples")


 class TestNetwork(TestCase):
@@ -19,21 +19,13 @@ class TestNetwork(TestCase):
         Load a graph from file if the extension is known.
         Raise an exception otherwise.
         """
-        config = {
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            }
-        }
-        G = network.from_config(config['network_params'])
+        config = {"network_params": {"path": join(ROOT, "test.gexf")}}
+        G = network.from_config(config["network_params"])
         assert G
         assert len(G) == 2
         with self.assertRaises(AttributeError):
-            config = {
-                'network_params': {
-                    'path': join(ROOT, 'unknown.extension')
-                }
-            }
-            G = network.from_config(config['network_params'])
+            config = {"network_params": {"path": join(ROOT, "unknown.extension")}}
+            G = network.from_config(config["network_params"])
             print(G)

     def test_generate_barabasi(self):
@@ -41,15 +33,11 @@ class TestNetwork(TestCase):
         If no path is given, a generator and network parameters
         should be used to generate a network
         """
-        cfg = {
-            'params': {
-                'generator': 'barabasi_albert_graph'
-            }
-        }
+        cfg = {"params": {"generator": "barabasi_albert_graph"}}
         with self.assertRaises(Exception):
             G = network.from_config(cfg)
-        cfg['params']['n'] = 100
-        cfg['params']['m'] = 10
+        cfg["params"]["n"] = 100
+        cfg["params"]["m"] = 10
         G = network.from_config(cfg)
         assert len(G) == 100

@@ -61,68 +49,57 @@ class TestNetwork(TestCase):
         G = nx.random_geometric_graph(20, 0.1)
         env = environment.NetworkEnvironment(topology=G)
         f = io.BytesIO()
-        assert env.topologies['default']
-        network.dump_gexf(env.topologies['default'], f)
+        assert env.G
+        network.dump_gexf(env.G, f)

     def test_networkenvironment_creation(self):
         """Networkenvironment should accept netconfig as parameters"""
         model_params = {
-            'topologies': {
-                'default': {
-                    'path': join(ROOT, 'test.gexf')
+            "topology": {"path": join(ROOT, "test.gexf")},
+            "agents": {
+                "topology": True,
+                "distribution": [
+                    {
+                        "agent_class": CustomAgent,
                     }
+                ],
             },
-            'agents': {
-                'topology': 'default',
-                'distribution': [{
-                    'agent_class': CustomAgent,
-                }]
-            }
         }
         env = environment.Environment(**model_params)
-        assert env.topologies
+        assert env.G
         env.step()
-        assert len(env.topologies['default']) == 2
+        assert len(env.G) == 2
         assert len(env.agents) == 2
-        assert env.agents[1].count_agents(state_id='normal') == 2
-        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
+        assert env.agents[1].count_agents(state_id="normal") == 2
+        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
         assert env.agents[0].neighbors == 1

     def test_custom_agent_neighbors(self):
         """Allow for search of neighbors with a certain state_id"""
         config = {
-            'model_params': {
-                'topologies': {
-                    'default': {
-                        'path': join(ROOT, 'test.gexf')
-                    }
+            "model_params": {
+                "topology": {"path": join(ROOT, "test.gexf")},
+                "agents": {
+                    "topology": True,
+                    "distribution": [{"weight": 1, "agent_class": CustomAgent}],
                 },
-                'agents': {
-                    'topology': 'default',
-                    'distribution': [
-                        {
-                            'weight': 1,
-                            'agent_class': CustomAgent
-                        }
-                    ]
-                }
             },
-            'max_time': 10,
+            "max_time": 10,
         }
         s = simulation.from_config(config)
         env = s.run_simulation(dry_run=True)[0]
-        assert env.agents[1].count_agents(state_id='normal') == 2
-        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
+        assert env.agents[1].count_agents(state_id="normal") == 2
+        assert env.agents[1].count_agents(state_id="normal", limit_neighbors=True) == 1
         assert env.agents[0].neighbors == 1

     def test_subgraph(self):
-        '''An agent should be able to subgraph the global topology'''
+        """An agent should be able to subgraph the global topology"""
         G = nx.Graph()
         G.add_node(3)
         G.add_edge(1, 2)
         distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
-        aconfig = config.AgentConfig(distribution=distro, topology='default')
-        env = environment.Environment(name='Test', topologies={'default': G}, agents=aconfig)
+        aconfig = config.AgentConfig(distribution=distro, topology=True)
+        env = environment.Environment(name="Test", topology=G, agents=aconfig)
         lst = list(env.network_agents)

         a2 = env.find_one(node_id=2)
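Note: the hunks above show the tests moving from a `topologies={'default': G}` mapping (accessed through `env.topologies['default']`) to a single `topology=G` argument exposed as `env.G`. A minimal sketch of that single-topology style, inferred from the assertions in the updated tests and not an authoritative API reference:

```python
import networkx as nx
from soil import environment

# Build any NetworkX graph and hand it to the environment directly.
G = nx.complete_graph(5)
env = environment.NetworkEnvironment(topology=G)

# The graph is now reachable as a single attribute instead of a named mapping.
assert env.G is not None
```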
tests/test_time.py (new file, 74 lines)
@@ -0,0 +1,74 @@
+from unittest import TestCase
+
+from soil import time, agents, environment
+
+class TestMain(TestCase):
+    def test_cond(self):
+        '''
+        A condition should match a When if the condition is True
+        '''
+
+        t = time.Cond(lambda t: True)
+        f = time.Cond(lambda t: False)
+        for i in range(10):
+            w = time.When(i)
+            assert w == t
+            assert w is not f
+
+    def test_cond(self):
+        '''
+        Comparing a Cond to a Delta should always return False
+        '''
+
+        c = time.Cond(lambda t: False)
+        d = time.Delta(1)
+        assert c is not d
+
+    def test_cond_env(self):
+        '''
+        '''
+
+        times_started = []
+        times_awakened = []
+        times = []
+        done = 0
+
+        class CondAgent(agents.BaseAgent):
+
+            def step(self):
+                nonlocal done
+                times_started.append(self.now)
+                while True:
+                    yield time.Cond(lambda agent: agent.model.schedule.time >= 10)
+                    times_awakened.append(self.now)
+                    if self.now >= 10:
+                        break
+                done += 1
+
+        env = environment.Environment(agents=[{'agent_class': CondAgent}])
+
+
+        while env.schedule.time < 11:
+            env.step()
+            times.append(env.now)
+        assert env.schedule.time == 11
+        assert times_started == [0]
+        assert times_awakened == [10]
+        assert done == 1
+        # The first time will produce the Cond.
+        # Since there are no other agents, time will not advance, but the number
+        # of steps will.
+        assert env.schedule.steps == 12
+        assert len(times) == 12
+
+        while env.schedule.time < 12:
+            env.step()
+            times.append(env.now)
+
+        assert env.schedule.time == 12
+        assert times_started == [0, 11]
+        assert times_awakened == [10, 11]
+        assert done == 2
+        # Once more to yield the cond, another one to continue
+        assert env.schedule.steps == 14
+        assert len(times) == 14
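Note: the new tests above exercise `time.Cond` from a generator-style `step()`. Distilled into a short sketch (an illustration based on those tests, not an API reference): an agent can yield a `time.Cond`, and the scheduler only resumes it once the condition evaluates to True.

```python
from soil import agents, environment, time


class WaitsForTen(agents.BaseAgent):
    def step(self):
        # Suspend until the model clock reaches 10, as in test_cond_env above.
        yield time.Cond(lambda agent: agent.model.schedule.time >= 10)
        print("resumed at", self.now)


env = environment.Environment(agents=[{"agent_class": WaitsForTen}])
while env.schedule.time < 11:
    env.step()
```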