Genesis Evolutionary Approach: Technical Deep Dive¶
Table of Contents¶
- Overview
- Island-Based Evolution Architecture
- The Evolution Loop
- Parent Selection Strategies
- Inspiration Selection & Context Building
- Mutation Operators: Patch Types
- Migration Strategies
- LLM Integration & Dynamic Model Selection
- Archive Management & Elite Preservation
- Novelty Detection with Embeddings
- Meta-Recommendations
- Configuration Parameters Reference
Overview¶
Genesis implements an LLM-driven evolutionary algorithm that optimizes code through iterative generation and selection. Unlike traditional genetic algorithms that operate on bit strings or numerical parameters, Genesis evolves complete programs by prompting large language models to generate patches and improvements.
Core Principles¶
- Population-Based Search: Multiple candidate programs (the population) evolve simultaneously
- Island Model: Populations are divided into isolated subpopulations (islands) that evolve independently with periodic migration
- Fitness-Guided Selection: Programs are evaluated on test cases, and performance metrics guide parent selection
- LLM-Driven Mutation: Instead of random bit flips, mutations are intelligent code modifications generated by LLMs
- Archive of Elites: Best-performing programs are preserved in an archive to guide future generations
The Big Picture¶
┌─────────────────────────────────────────────────────────────┐
│ EVOLUTION LOOP │
│ │
│ Generation N │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Island 1 │ │ Island 2 │ │ Island 3 │ │
│ │ Programs │ │ Programs │ │ Programs │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ ├─[Select Parent]┤ │ │
│ │ │ │ │
│ ├─[Get Inspirations from Archive & Top-K] │
│ │ │ │ │
│ ├─[LLM Generates Patch]───────────┤ │
│ │ │ │ │
│ ├─[Evaluate New Program]──────────┤ │
│ │ │ │ │
│ ├─[Add to Island & Maybe Archive]─┤ │
│ │ │ │ │
│ Generation N+1 │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Island 1 │◄────┤Migration├────►│ Island 3 │ │
│ │ Programs │ │ Every M │ │ Programs │ │
│ └──────────┘ │ Gens │ └──────────┘ │
│ └──────────┘ │
└─────────────────────────────────────────────────────────────┘
Island-Based Evolution Architecture¶
Why Islands?¶
The island model provides several benefits:
- Diversity Maintenance: Isolated populations explore different regions of the search space
- Parallelization: Islands can evolve independently, enabling parallel processing
- Controlled Exploration/Exploitation: Migration balances local optimization with global diversity
Island Assignment¶
DefaultIslandAssignmentStrategy (genesis/database/islands.py:82-127)
When a new program is created, it inherits its parent's island:
def assign_island(self, parent_id: str, generation: int) -> Optional[int]:
    parent = self.db.get_program_by_id(parent_id)
    return parent.island_idx if parent else None
Special Case: Generation 0 initialization uses CopyInitialProgramIslandStrategy to replicate the seed program across all islands:
# If num_islands=5, the initial program is copied 5 times
# Each copy is assigned to a different island (0, 1, 2, 3, 4)
for island_idx in range(num_islands):
    copy_program(initial_program, island_idx)
This ensures each island starts with identical seeds but can diverge over generations.
The Evolution Loop¶
Main Workflow (genesis/core/runner.py)¶
class EvolutionRunner:
    def run(self):
        # 1. Initialize generation 0 programs (seed program(s))
        self.initialize_generation_zero()
        for generation in range(self.config.num_generations):
            # 2. Sample parents (one per island or globally)
            parents = self.sample_parents(generation)
            for parent in parents:
                # 3. Select inspirations (context for LLM)
                archive_inspirations = self.inspiration_selector.sample_context(
                    parent, n=self.config.num_archive_inspirations
                )
                top_k_inspirations = self.top_k_selector.get_top_k(
                    k=self.config.num_top_k_inspirations
                )
                # 4. Get meta-recommendations if interval reached
                meta_rec = self.get_meta_recommendations(generation)
                # 5. Generate prompt with sampler
                prompt_data = self.prompt_sampler.sample(
                    parent=parent,
                    archive_inspirations=archive_inspirations,
                    top_k_inspirations=top_k_inspirations,
                    meta_recommendations=meta_rec
                )
                # 6. LLM generates patch
                response = self.llm_client.generate(
                    messages=prompt_data.messages,
                    model=self.select_model(parent)
                )
                # 7. Apply patch to get new program code
                new_code = self.apply_patch(parent.code, response.patch)
                # 8. Evaluate new program
                metrics, correct, error = self.evaluator.evaluate(new_code)
                # 9. Compute embedding for novelty
                embedding = self.embedding_client.embed(new_code)
                # 10. Store in database
                program_id = self.db.add_program(
                    code=new_code,
                    parent_id=parent.id,
                    generation=generation + 1,
                    island_idx=self.island_strategy.assign_island(parent.id, generation + 1),
                    correct=correct,
                    metrics=metrics,
                    embedding=embedding,
                    metadata={
                        'patch_type': prompt_data.patch_type,
                        'model': response.model,
                        'inspirations': [i.id for i in archive_inspirations]
                    }
                )
                # 11. Maybe add to archive if elite
                if correct and self.archive_manager.is_elite(program_id):
                    self.archive_manager.add_to_archive(program_id)
            # 12. Perform migration if interval reached
            if generation % self.config.migration_interval == 0:
                self.migration_strategy.perform_migration(generation)
Parent Selection Strategies¶
Parent selection determines which programs get to "reproduce" by having the LLM generate variations of them. Genesis supports four strategies:
1. Power Law Sampling (Default)¶
Location: genesis/database/parents.py:102-272
Key Idea: Higher-performing programs have higher selection probability, but with diminishing returns (power law distribution).
Algorithm:
def sample_parent(self):
    if random() < exploitation_ratio:
        # Sample from archive (elites only)
        candidates = get_archived_programs(
            island=current_island if enforce_island_separation else None
        )
        sampled_idx = sample_with_powerlaw(candidates, alpha=exploitation_alpha)
        return candidates[sampled_idx]
    else:
        # Sample from all correct programs
        candidates = get_correct_programs(
            island=current_island if enforce_island_separation else None
        )
        sampled_idx = sample_with_powerlaw(candidates, alpha=exploration_alpha)
        return candidates[sampled_idx]
Power Law Sampling Function:
def sample_with_powerlaw(programs: List, alpha: float) -> int:
    """
    Probability of selecting program i (ranked by score):
        P(i) ∝ (i + 1)^(-alpha)

    alpha = 0: Uniform sampling
    alpha > 1: Strong bias toward top performers
    alpha < 0: Bias toward lower performers (exploration)
    """
    n = len(programs)
    weights = [(i + 1) ** (-alpha) for i in range(n)]
    weights = np.array(weights) / sum(weights)
    return np.random.choice(n, p=weights)
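To see how alpha shapes the distribution, here is a small standalone demo (illustrative, not Genesis code):

```python
import numpy as np

def powerlaw_weights(n: int, alpha: float) -> np.ndarray:
    """Selection probabilities P(i) ∝ (i + 1)^(-alpha) for ranks 0..n-1."""
    weights = np.array([(i + 1) ** (-alpha) for i in range(n)])
    return weights / weights.sum()

# Five programs ranked best (rank 0) to worst (rank 4)
print(powerlaw_weights(5, alpha=0.0))  # uniform: 0.2 each
print(powerlaw_weights(5, alpha=1.0))  # top-heavy: rank 0 gets ~0.44
```

With alpha = 1.0 the best-ranked program receives roughly 44% of the probability mass, while a negative alpha inverts the bias toward lower-ranked programs.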
Key Parameters:
- exploitation_ratio (default: 0.5): Probability of sampling from archive vs all programs
- exploitation_alpha (default: 1.0): Power law exponent for archive sampling
- exploration_alpha (default: 0.5): Power law exponent for exploration sampling
- enforce_island_separation (default: True): Restrict sampling to current island
2. Weighted Sampling¶
Location: genesis/database/parents.py:274-455
Key Idea: Combines performance-based weighting with a novelty bonus to encourage diverse exploration.
Algorithm:
def sample_parent(self):
    candidates = get_correct_programs(island=current_island)
    # For each candidate i, compute weight w_i = s_i * h_i
    weights = []
    for candidate in candidates:
        # s_i: Performance score (sigmoid-scaled)
        performance_delta = candidate.score - baseline_score
        s_i = stable_sigmoid(lambda_param * performance_delta / scale_factor)
        # h_i: Novelty bonus (fewer children = higher weight)
        num_children = count_children(candidate)
        h_i = 1 / (1 + num_children)
        w_i = s_i * h_i
        weights.append(w_i)
    # Normalize and sample
    probs = np.array(weights) / sum(weights)
    return np.random.choice(candidates, p=probs)
Sigmoid Function (numerically stable):
def stable_sigmoid(x: float) -> float:
    """
    σ(x) = 1 / (1 + e^(-x))

    Numerically stable implementation:
    - For x >= 0: σ(x) = 1 / (1 + e^(-x))
    - For x < 0:  σ(x) = e^x / (1 + e^x)
    """
    if x >= 0:
        return 1 / (1 + np.exp(-x))
    else:
        exp_x = np.exp(x)
        return exp_x / (1 + exp_x)
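Putting s_i and h_i together, a worked example with hypothetical candidates shows how the novelty bonus reshapes the performance weights:

```python
import numpy as np

def stable_sigmoid(x: float) -> float:
    # Numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + np.exp(-x))
    exp_x = np.exp(x)
    return exp_x / (1.0 + exp_x)

# Hypothetical candidates: (score delta vs. baseline, number of children)
candidates = [(0.3, 4), (0.1, 0), (-0.2, 1)]
lambda_param, scale_factor = 1.0, 1.0

weights = [
    stable_sigmoid(lambda_param * delta / scale_factor) * (1.0 / (1.0 + children))
    for delta, children in candidates
]
probs = np.array(weights) / sum(weights)
# The childless candidate dominates despite a middling score:
# its novelty bonus h = 1.0 outweighs the others' penalties.
```

Here the candidate with delta 0.3 but four children ends up with less weight than the never-sampled candidate with delta 0.1.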
Key Parameters:
- lambda_param (default: 1.0): Controls steepness of sigmoid (higher = more selective)
- scale_factor: Normalizes performance differences
- baseline_score: Reference score for computing deltas
Effect of Novelty Bonus:
- A program with 0 children: h = 1.0 (full weight)
- A program with 1 child: h = 0.5 (half weight)
- A program with 9 children: h = 0.1 (reduced to 10%)
This encourages sampling underexplored programs even if they aren't the absolute best performers.
3. Beam Search Sampling¶
Location: genesis/database/parents.py:458-545
Key Idea: Lock onto a single parent for multiple generations to deeply explore its neighborhood.
Algorithm:
class BeamSearchSamplingStrategy:
    def __init__(self, num_beams: int):
        self.num_beams = num_beams
        self.beam_parent = None
        self.beam_count = 0

    def sample_parent(self):
        if self.beam_count >= self.num_beams or self.beam_parent is None:
            # Select new beam parent (best program from archive or all correct)
            candidates = get_archived_programs() or get_correct_programs()
            self.beam_parent = max(candidates, key=lambda p: p.score)
            self.beam_count = 0
        self.beam_count += 1
        return self.beam_parent
Use Case: Intensive local search when you know a good program exists and want to refine it.
Key Parameters:
- num_beams: Number of children to generate from each beam parent
4. Best-of-N Sampling¶
Location: genesis/database/parents.py:548-612
Key Idea: Always use the generation 0 (initial) program as the parent.
Algorithm:
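The listing is omitted in the source; a minimal sketch of the strategy (the seed lookup is a hypothetical stand-in for the database call):

```python
class BestOfNSamplingStrategy:
    """Always returns the generation-0 seed program as the parent.

    `get_seed` is a hypothetical lookup standing in for the database query.
    """

    def __init__(self, get_seed):
        self.get_seed = get_seed

    def sample_parent(self):
        # No chaining: every child is a direct variation of the seed
        return self.get_seed()

# Usage with a stub seed program
seed = {"id": "gen0", "generation": 0}
strategy = BestOfNSamplingStrategy(get_seed=lambda: seed)
```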
Use Case: When you want all variations to be direct modifications of the seed program (no chaining).
Inspiration Selection & Context Building¶
To help the LLM generate better patches, Genesis provides context in the form of high-performing programs called "inspirations."
Two Types of Inspirations¶
- Archive Inspirations: Elite programs from the archive
- Top-K Inspirations: Current best performers globally or per-island
Archive Inspiration Selector¶
Location: genesis/database/inspirations.py:38-140
Algorithm:
def sample_context(self, parent: Program, n: int) -> List[Program]:
    inspirations = []
    # 1. Always include the best program (if correct and not the parent)
    best_program = get_best_program()
    if best_program and best_program.correct and best_program.id != parent.id:
        inspirations.append(best_program)
    # 2. Sample elites from parent's island
    island_elites = get_archived_programs(island=parent.island_idx)
    num_island_elites = int(elite_selection_ratio * n)
    sampled_island_elites = random.sample(
        island_elites, min(num_island_elites, len(island_elites))
    )
    inspirations.extend(sampled_island_elites)
    # 3. Fill remaining slots with random correct programs from island
    remaining = n - len(inspirations)
    island_programs = get_correct_programs(island=parent.island_idx)
    inspirations.extend(
        random.sample(island_programs, min(remaining, len(island_programs)))
    )
    # 4. Fallback: if the island has too few programs and island
    #    separation is not enforced, draw from the global pool
    if len(inspirations) < n and not enforce_island_separation:
        all_programs = get_correct_programs()  # Global pool
        inspirations.extend(random.sample(all_programs, n - len(inspirations)))
    return inspirations[:n]
Key Parameters:
- num_archive_inspirations (default: 3): Number of archive inspirations to include
- elite_selection_ratio (default: 0.5): Fraction of inspirations from elites vs random correct
- enforce_island_separation (default: True): Restrict to current island
Top-K Inspiration Selector¶
Location: genesis/database/inspirations.py:143+
Algorithm:
def get_top_k(self, k: int, island: Optional[int] = None) -> List[Program]:
    candidates = get_correct_programs(island=island)
    candidates.sort(key=lambda p: p.combined_score, reverse=True)
    return candidates[:k]
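With stub records, the selector reduces to a sort-and-slice:

```python
from collections import namedtuple

# Stub records; real Program objects carry code, metrics, embeddings, etc.
Prog = namedtuple("Prog", ["id", "combined_score"])

def top_k(candidates, k):
    # Highest combined_score first, keep the first k
    return sorted(candidates, key=lambda p: p.combined_score, reverse=True)[:k]

programs = [Prog("a", 0.71), Prog("b", 0.93), Prog("c", 0.55), Prog("d", 0.88)]
best_two = top_k(programs, k=2)
# best_two → [Prog(id='b', combined_score=0.93), Prog(id='d', combined_score=0.88)]
```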
Key Parameters:
- num_top_k_inspirations (default: 2): Number of top performers to include
Context Message Construction¶
The inspirations are formatted into a message for the LLM:
def construct_eval_history_msg(inspirations: List[Program], language: str) -> str:
    msg = "Here are some high-performing programs for reference:\n\n"
    for i, prog in enumerate(inspirations):
        msg += f"Program {i+1}:\n"
        msg += f"```{language}\n{prog.code}\n```\n"
        msg += f"Score: {prog.combined_score:.4f}\n"
        if include_text_feedback and prog.metadata.get('text_feedback'):
            msg += f"Feedback: {prog.metadata['text_feedback']}\n"
        msg += "\n"
    return msg
Mutation Operators: Patch Types¶
Genesis supports three patch types that define how the LLM modifies programs:
1. Diff Patches (Incremental Changes)¶
Prompt Style: "Here is a program. Suggest a small improvement as a unified diff."
Example:
--- a/program.py
+++ b/program.py
@@ -10,3 +10,3 @@
     for i in range(n):
-        result += slow_operation(i)
+        result += fast_operation(i)  # Optimized
     return result
Use Case: Fine-tuning, bug fixes, small optimizations
2. Full Rewrites¶
Prompt Style: "Here is a program. Rewrite it completely to improve performance."
Example:
# LLM generates a completely new program from scratch
def optimized_algorithm(data):
    # Entirely new implementation using a different approach
    return vectorized_numpy_solution(data)
Use Case: Exploring radically different algorithms, major refactoring
3. Cross (Crossover)¶
Prompt Style: "Here are two programs. Combine their best features into a new program."
Example:
Program A: Fast but inaccurate
Program B: Slow but accurate
Crossover: Fast AND accurate (combines A's algorithm with B's validation)
Use Case: Combining complementary strengths from different lineages
Patch Type Sampling¶
Location: genesis/core/sampler.py:45-89
def sample(self, parent, archive_inspirations, top_k_inspirations):
    # Sample patch type based on configured probabilities
    patch_type = np.random.choice(
        self.patch_types,        # e.g., ["diff", "full", "cross"]
        p=self.patch_type_probs  # e.g., [0.7, 0.2, 0.1]
    )
    cross_parent = None
    if patch_type == "cross":
        # Need a second parent for crossover
        cross_parent = self.parent_selector.sample_parent()
    return {
        'patch_type': patch_type,
        'parent': parent,
        'cross_parent': cross_parent,
        'archive_inspirations': archive_inspirations,
        'top_k_inspirations': top_k_inspirations
    }
Configuration:
patch_types: ["diff", "full", "cross"]
patch_type_probs: [0.7, 0.2, 0.1] # 70% diff, 20% full, 10% cross
Migration Strategies¶
Migration allows programs to move between islands, promoting diversity while preserving local optima.
Elitist Migration Strategy¶
Location: genesis/database/islands.py:216-288
Key Idea: Periodically move programs between islands, but protect elites from being replaced.
Algorithm:
def perform_migration(self, generation: int):
    if generation % migration_interval != 0:
        return False  # Only migrate every N generations
    for island_idx in range(num_islands):
        # 1. Identify migrants (correct programs that can move)
        migrants = get_correct_programs(
            island=island_idx,
            exclude_generation_0=True  # Never migrate seed programs
        )
        # 2. If island_elitism enabled, protect best program
        if island_elitism:
            best_program = max(migrants, key=lambda p: p.combined_score)
            migrants = [m for m in migrants if m.id != best_program.id]
        # 3. Select random subset to migrate (based on migration_rate)
        num_to_migrate = int(len(migrants) * migration_rate)
        selected_migrants = random.sample(migrants, num_to_migrate)
        # 4. Assign new islands (ring topology: island i → island (i+1) % num_islands)
        target_island = (island_idx + 1) % num_islands
        for migrant in selected_migrants:
            self.db.update_program_island(migrant.id, target_island)
            logging.info(f"Migrated {migrant.id} from island {island_idx} to {target_island}")
    return True
Key Parameters:
- migration_interval (default: 5): Migrate every N generations
- migration_rate (default: 0.1): Fraction of programs to migrate (10%)
- island_elitism (default: True): Protect best program in each island
Migration Topology: Ring migration (island 0 → 1 → 2 → 3 → 0)
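The ring assignment and migrant count reduce to a few lines of arithmetic; a self-contained illustration:

```python
import random

num_islands = 4
migration_rate = 0.1

# Ring topology: each island sends migrants to its clockwise neighbor
targets = {i: (i + 1) % num_islands for i in range(num_islands)}
# targets → {0: 1, 1: 2, 2: 3, 3: 0}

# With 30 eligible migrants and a 10% rate, 3 programs move per event
migrants = [f"prog_{n}" for n in range(30)]
selected = random.sample(migrants, int(len(migrants) * migration_rate))
```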
Effect:
- Introduces diversity by exposing islands to successful strategies from neighbors
- Maintains local optimization by keeping most programs on their home island
- Protects elites to prevent losing hard-won discoveries
LLM Integration & Dynamic Model Selection¶
Multiple LLM Support¶
Genesis can use multiple LLM models simultaneously:
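For example (model names borrowed from the posterior-tracking example below; exact names depend on your deployment):

```yaml
llm_models: ["azure-gpt-4.1-mini", "azure-gpt-4.1", "gemini-flash"]
llm_dynamic_selection: "AsymmetricUCB"  # optional: let a bandit pick among them
```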
Dynamic Model Selection with Multi-Armed Bandits¶
Location: genesis/core/runner.py:189-242
Key Idea: Automatically learn which models perform best and allocate more generations to them.
Algorithm: Asymmetric Upper Confidence Bound (UCB)
class AsymmetricUCB:
    def __init__(self, models: List[str], c: float = 1.0):
        self.models = models
        self.c = c  # Exploration parameter
        self.counts = {m: 0 for m in models}    # Times model was selected
        self.rewards = {m: [] for m in models}  # Correctness outcomes

    def select_model(self) -> str:
        if any(self.counts[m] == 0 for m in self.models):
            # Initially, try each model at least once
            return [m for m in self.models if self.counts[m] == 0][0]
        # Compute UCB score for each model
        total_trials = sum(self.counts.values())
        ucb_scores = {}
        for model in self.models:
            # Mean reward (success rate)
            mean_reward = np.mean(self.rewards[model])
            # Exploration bonus
            exploration = self.c * np.sqrt(np.log(total_trials) / self.counts[model])
            ucb_scores[model] = mean_reward + exploration
        # Select model with highest UCB score
        return max(ucb_scores, key=ucb_scores.get)

    def update(self, model: str, reward: float):
        """Reward = 1 if program was correct, 0 otherwise"""
        self.counts[model] += 1
        self.rewards[model].append(reward)
Configuration:
llm_dynamic_selection: "AsymmetricUCB" # or "UCB", "EpsilonGreedy", etc.
llm_dynamic_selection_c: 1.0 # Exploration parameter
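As a sanity check on the arithmetic, the UCB rule can be evaluated by hand with stub statistics (values below are illustrative, not real model outcomes):

```python
import numpy as np

# Stub statistics: selection counts and mean correctness per model
counts = {"model-a": 10, "model-b": 5}
means = {"model-a": 0.6, "model-b": 0.4}
c = 1.0
total = sum(counts.values())  # 15 trials so far

ucb = {m: means[m] + c * np.sqrt(np.log(total) / counts[m]) for m in counts}
best = max(ucb, key=ucb.get)
# model-b's larger exploration bonus (fewer trials) outweighs
# its lower mean reward, so best → "model-b"
```

This is the exploration/exploitation trade-off in miniature: the under-sampled model wins the round despite a worse success rate.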
Posterior Tracking: Genesis records model_posteriors in program metadata:
model_posteriors = {
    "azure-gpt-4.1-mini": 0.45,  # 45% of recent correct programs
    "azure-gpt-4.1": 0.35,
    "gemini-flash": 0.20
}
This data is visualized in the WebUI under "Model Posteriors" view.
Archive Management & Elite Preservation¶
The archive is a persistent store of high-performing programs used for inspiration and parent selection.
Archive Strategy¶
Location: genesis/database/archive.py
Default Strategy: Archive programs that meet correctness and score thresholds:
def should_archive(self, program: Program) -> bool:
    if not program.correct:
        return False  # Only archive correct programs
    # Archive if program is in top N globally
    all_correct = get_correct_programs()
    all_correct.sort(key=lambda p: p.combined_score, reverse=True)
    return program in all_correct[:self.config.archive_size]
Alternative: DiversityArchive (archives based on embedding similarity to promote diversity)
def should_archive(self, program: Program) -> bool:
    if not program.correct:
        return False
    # Compute similarity to existing archive members
    archive_programs = get_archived_programs()
    similarities = [
        cosine_similarity(program.embedding, archived.embedding)
        for archived in archive_programs
    ]
    max_similarity = max(similarities) if similarities else 0
    # Archive if sufficiently different (novelty threshold)
    return max_similarity < self.config.novelty_threshold
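The `cosine_similarity` helper is not shown in the source; a standard implementation over the embedding vectors would be:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Parallel vectors score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (up to float rounding)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```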
Key Parameters:
- archive_size (default: 50): Maximum archive size
- novelty_threshold (default: 0.95): Minimum dissimilarity for diversity archive
Novelty Detection with Embeddings¶
Genesis computes code embeddings to detect semantically similar programs.
Embedding Model¶
Default: text-embedding-3-small (OpenAI)
Process:
from openai import OpenAI

client = OpenAI()

def compute_embedding(self, code: str) -> List[float]:
    # Uses the openai>=1.0 client API (Embedding.create is deprecated)
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=code
    )
    return response.data[0].embedding  # 1536-dimensional vector
Novelty Bonus (Weighted Sampling)¶
Location: genesis/database/parents.py:331-389
def compute_novelty_bonus(program: Program) -> float:
    """
    Novelty bonus based on number of children:
        h_i = 1 / (1 + num_children_i)
    Unexplored programs get higher weight.
    """
    num_children = len(get_children(program.id))
    return 1 / (1 + num_children)
Embedding Visualization¶
The WebUI displays an embedding similarity heatmap under "Embeddings" view:
- High similarity (red): programs have similar implementations
- Low similarity (blue): programs are semantically different
This helps visualize diversity and detect when evolution is converging.
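Under the hood, the heatmap is a pairwise cosine-similarity matrix over program embeddings; a minimal sketch of that computation (illustrative, not the WebUI code):

```python
import numpy as np

def similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity for row-vector embeddings; shape (n, n)."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return unit @ unit.T

# Three toy 2-d "embeddings" (real vectors are 1536-dimensional)
embs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sim = similarity_matrix(embs)
# Diagonal is 1.0 (each program vs. itself); sim[0, 2] ≈ 0.707
```

A matrix that trends toward uniform red signals converging, low-diversity populations.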
Meta-Recommendations¶
Meta-recommendations are high-level strategic suggestions generated by a separate LLM that observes the evolution process.
When to Use¶
Meta-recommendations are disabled by default. Setting meta_rec_interval to a positive integer N (see the Configuration Parameters Reference below) generates a recommendation every N generations; this is most useful on long runs where the search may stagnate.
Meta-Recommendation Process¶
Location: genesis/core/runner.py:312-378
Algorithm:
def generate_meta_recommendation(self, generation: int) -> str:
    if generation % self.config.meta_rec_interval != 0:
        return None
    # 1. Gather evolution statistics
    stats = {
        'best_score': get_best_program().combined_score,
        'best_score_history': get_score_history(),
        'diversity': compute_diversity_metric(),
        'stagnation_count': count_stagnant_generations(),
        'archive_size': len(get_archived_programs()),
        'island_distribution': get_programs_per_island()
    }
    # 2. Prompt meta-LLM with statistics
    prompt = f"""
    You are observing an evolutionary optimization process at generation {generation}.
    Statistics:
    - Best score: {stats['best_score']}
    - Score trend: {stats['best_score_history'][-10:]}
    - Diversity: {stats['diversity']}
    - Stagnation: {stats['stagnation_count']} generations without improvement
    Provide a concise recommendation (2-3 sentences) on how to improve the search.
    Focus on: exploration vs exploitation, diversity, island migration, patch types.
    """
    meta_recommendation = meta_llm.generate(prompt)
    # 3. Include in next generation's prompts
    return meta_recommendation
Example Meta-Recommendation:
The evolution has stagnated for 5 generations with best score plateaued at 0.87.
Recommendation: Increase exploration by raising exploration_alpha to 0.3 and
trying more full rewrites (increase full patch type probability to 0.3).
Consider triggering migration to inject cross-island diversity.
Effect: The meta-recommendation is appended to the LLM prompt for the next generation, potentially influencing the direction of mutations.
Configuration Parameters Reference¶
Evolution Configuration¶
| Parameter | Default | Description |
|---|---|---|
| num_generations | 10 | Number of evolutionary generations |
| max_parallel_jobs | 2 | Maximum concurrent evaluations |
| patch_types | ["diff"] | Mutation operators: diff, full, cross |
| patch_type_probs | [1.0] | Probabilities for each patch type |
Island Configuration¶
| Parameter | Default | Description |
|---|---|---|
| num_islands | 1 | Number of isolated subpopulations |
| migration_interval | 5 | Migrate every N generations |
| migration_rate | 0.1 | Fraction of programs to migrate |
| island_elitism | True | Protect best program during migration |
| enforce_island_separation | True | Restrict sampling to current island |
Parent Selection (Power Law)¶
| Parameter | Default | Description |
|---|---|---|
| parent_selection_strategy | "power_law" | power_law, weighted, beam_search, best_of_n |
| exploitation_ratio | 0.5 | Probability of sampling from archive |
| exploitation_alpha | 1.0 | Power law exponent for archive (higher = more selective) |
| exploration_alpha | 0.5 | Power law exponent for exploration |
Parent Selection (Weighted)¶
| Parameter | Default | Description |
|---|---|---|
| lambda_param | 1.0 | Sigmoid steepness (higher = more selective) |
Parent Selection (Beam Search)¶
| Parameter | Default | Description |
|---|---|---|
| num_beams | 5 | Children per beam parent before switching |
Inspiration Selection¶
| Parameter | Default | Description |
|---|---|---|
| num_archive_inspirations | 3 | Archive inspirations per generation |
| num_top_k_inspirations | 2 | Top-K inspirations per generation |
| elite_selection_ratio | 0.5 | Fraction of inspirations from elites |
LLM Configuration¶
| Parameter | Default | Description |
|---|---|---|
| llm_models | ["gpt-4-mini"] | List of LLM models to use |
| llm_dynamic_selection | None | Bandit algorithm: AsymmetricUCB, UCB, EpsilonGreedy |
| llm_dynamic_selection_c | 1.0 | UCB exploration parameter |
| embedding_model | "text-embedding-3-small" | Model for code embeddings |
| use_text_feedback | False | Include evaluation feedback in prompts |
Archive Configuration¶
| Parameter | Default | Description |
|---|---|---|
| archive_size | 50 | Maximum programs in archive |
| novelty_threshold | 0.95 | Similarity threshold for diversity archive |
Meta-Recommendations¶
| Parameter | Default | Description |
|---|---|---|
| meta_rec_interval | None | Generate meta-rec every N gens (None = disabled) |
| meta_llm_model | "gpt-4" | Model for meta-recommendations |
Summary: The Full Picture¶
Genesis orchestrates a sophisticated evolutionary search:
- Initialize populations on multiple islands with seed programs
- Each generation:
  - Select parents using fitness-guided sampling (power law, weighted, beam search)
  - Gather inspirations from archive and top-K performers
  - Generate mutations by prompting LLMs with parent + inspirations + meta-recommendations
  - Evaluate new programs on test cases
  - Archive elites for future reference
  - Compute embeddings for novelty tracking
- Periodically migrate programs between islands to balance diversity
- Dynamically adjust model selection based on success rates
- Iterate until convergence or generation limit
The result is a powerful optimization framework that leverages LLM reasoning to navigate complex code spaces efficiently.
Further Reading¶
- Getting Started Guide - Installation and first experiments
- Configuration Guide - Detailed parameter tuning
- Developer Guide - Extending Genesis with custom strategies
- WebUI Guide - Visualizing evolutionary runs
- Research Papers - Theoretical foundations and related work