3D Generation
The 3D Generation module converts 2D instance segmentation masks into 3D mesh models.
Overview
This module implements various techniques to reconstruct 3D models from 2D segmentation masks, including depth estimation, mesh generation, and texture mapping.
Techniques
1. Depth Estimation
Estimate depth maps from 2D masks using various methods:
- MiDaS: Monocular depth estimation (see the sketch after this list)
- Shape-from-Silhouette: Classical computer vision approach
- Learning-based: Neural network depth prediction
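The MiDaS option wraps a pretrained monocular depth network. As a point of reference, a minimal standalone sketch using the publicly documented `intel-isl/MiDaS` torch.hub entry point looks like this (the image path and device choice are placeholders; this is not the module's own code):

```python
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the DPT_Large MiDaS model and its matching preprocessing transform.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
batch = transforms.dpt_transform(img).to(device)

with torch.no_grad():
    pred = midas(batch)
    # Resize the prediction back to the input resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze().cpu().numpy()
```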
2. Mesh Generation
Convert depth maps to 3D meshes:
- Marching Cubes: Isosurface extraction (example after this list)
- Poisson Surface Reconstruction: Smooth mesh generation
- Delaunay Triangulation: Point cloud to mesh
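To make the Marching Cubes step concrete, the sketch below extracts a triangle mesh from a toy occupancy volume with scikit-image; the volume itself is synthetic and only there for illustration:

```python
import numpy as np
from skimage import measure

# Synthetic occupancy volume: positive inside a sphere of radius 20, negative outside.
zz, yy, xx = np.indices((64, 64, 64)).astype(np.float32)
volume = 20.0 - np.sqrt((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2)

# Extract the isosurface where the volume crosses zero.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(verts.shape, faces.shape)  # (n_vertices, 3), (n_triangles, 3)
```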
3. Texture Mapping
Apply textures to 3D models:
- UV Mapping: Project 2D image onto 3D surface
- Color Transfer: Extract colors from source images (sketch after this list)
- Procedural Textures: Generate synthetic textures
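As an illustration of the Color Transfer idea, the helper below (a hypothetical function, not part of the module's API) samples per-vertex colors from the source image with Open3D, assuming the mesh vertices are still in pixel coordinates:

```python
import numpy as np
import open3d as o3d

def colorize_from_image(mesh: o3d.geometry.TriangleMesh, image: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Assign per-vertex colors by sampling an RGB source image.

    Assumes vertex x/y are pixel coordinates, as when the mesh was lifted
    directly from a per-pixel depth map.
    """
    h, w = image.shape[:2]
    verts = np.asarray(mesh.vertices)
    cols = np.clip(np.round(verts[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(verts[:, 1]).astype(int), 0, h - 1)
    colors = image[rows, cols, :3].astype(np.float64) / 255.0  # scale to [0, 1]
    mesh.vertex_colors = o3d.utility.Vector3dVector(colors)
    return mesh
```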
Usage
Basic 3D Generation
```python
import importlib

# The package name starts with a digit, so a plain `import` statement will not parse.
ModelGenerator = importlib.import_module("src.3d_generation").ModelGenerator

generator = ModelGenerator(method='midas')

# Generate 3D model from mask
model_3d = generator.generate_from_mask(mask)

# Save to file
generator.save_model(model_3d, 'data/3d_models/output.obj')
```
Advanced Options
```python
# Configure generation parameters
generator = ModelGenerator(
    method='midas',
    resolution=512,
    smoothing=True,
    texture_mapping=True,
)

# Generate with custom settings
model_3d = generator.generate_from_mask(
    mask,
    depth_scale=1.5,
    mesh_simplification=0.8,
)
```
Batch Processing
```python
# Process multiple masks in parallel workers
masks = [mask1, mask2, mask3]
models = generator.batch_generate(masks, num_workers=4)

for i, model in enumerate(models):
    generator.save_model(model, f'data/3d_models/model_{i}.obj')
```
Depth Estimation Methods
MiDaS
- Pros: High-quality depth estimation
- Cons: Requires GPU, slower
Shape-from-Silhouette
- Pros: Fast, CPU-friendly
- Cons: Less accurate depth
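One cheap, CPU-only heuristic in this family treats the silhouette's distance transform as a depth proxy ("inflating" the mask). The sketch below is illustrative only and not necessarily what the module implements:

```python
import numpy as np
from scipy import ndimage

def pseudo_depth_from_mask(mask: np.ndarray, depth_scale: float = 1.0) -> np.ndarray:
    """Inflate a binary silhouette into a rounded pseudo-depth map."""
    dist = ndimage.distance_transform_edt(mask.astype(bool))  # distance to the boundary
    if dist.max() > 0:
        dist /= dist.max()                                    # normalize to [0, 1]
    return depth_scale * np.sqrt(dist)                        # rounded "pillow" profile
```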
Mesh Formats
Supported output formats:
- OBJ: Wavefront OBJ (with MTL for materials)
- PLY: Polygon File Format
- STL: Stereolithography (3D printing)
- GLTF: GL Transmission Format (web-ready)
```python
# Export in different formats
generator.save_model(model, 'output.obj', format='obj')
generator.save_model(model, 'output.ply', format='ply')
generator.save_model(model, 'output.stl', format='stl')
```
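If a target format is not covered by save_model, a general-purpose mesh library such as trimesh can convert an exported file; the snippet below is a hedged example, not part of the module:

```python
import trimesh

# Convert the exported OBJ to web-ready binary glTF.
mesh = trimesh.load("data/3d_models/output.obj")
mesh.export("data/3d_models/output.glb")
```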
Optimization
Mesh Simplification
Reduce polygon count while preserving shape:
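A minimal sketch with Open3D's quadric decimation is shown below; this is illustrative, and the module's own simplification call may differ:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("data/3d_models/output.obj")

# Keep roughly 80% of the triangles (cf. simplification_ratio: 0.8 in the config below).
target = max(1, int(len(mesh.triangles) * 0.8))
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
o3d.io.write_triangle_mesh("data/3d_models/output_simplified.obj", simplified)
```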
Smoothing
Apply smoothing algorithms:
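For example, Taubin smoothing with Open3D (again a sketch; the module may use a different filter):

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("data/3d_models/output.obj")

# Taubin smoothing reduces noise while shrinking the mesh less than plain Laplacian smoothing.
smoothed = mesh.filter_smooth_taubin(number_of_iterations=5)
smoothed.compute_vertex_normals()
o3d.io.write_triangle_mesh("data/3d_models/output_smooth.obj", smoothed)
```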
Configuration
Configure 3D generation via configs/3d_generation.yaml:
```yaml
3d_generation:
  # Depth estimation
  depth_estimator: "midas"
  depth_model: "DPT_Large"

  # Mesh generation
  mesh_resolution: 256
  marching_cubes_threshold: 0.5

  # Optimization
  simplification_ratio: 0.8
  smoothing_iterations: 5

  # Texture
  texture_resolution: 1024
  uv_unwrap_method: "smart_project"
```
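How these values reach the generator depends on the project's loaders; one straightforward pattern (the mapping of config keys to constructor arguments here is an assumption) is:

```python
import yaml

with open("configs/3d_generation.yaml") as f:
    cfg = yaml.safe_load(f)["3d_generation"]

# Hypothetical mapping of config keys onto constructor arguments.
generator = ModelGenerator(
    method=cfg["depth_estimator"],
    resolution=cfg["mesh_resolution"],
)
```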
Performance Tips
- Use GPU for depth estimation when available
- Batch process multiple masks together
- Cache depth maps to avoid recomputation (see the sketch after this list)
- Adjust resolution based on your needs (lower = faster)
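For the caching tip, one simple pattern (the cache location and helper are hypothetical) keys each depth map by a hash of its mask:

```python
import hashlib
from pathlib import Path

import numpy as np

CACHE_DIR = Path("data/depth_cache")
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def cached_depth(mask: np.ndarray, estimate_fn) -> np.ndarray:
    """Compute a depth map once per unique mask and reuse it afterwards."""
    key = hashlib.sha1(mask.tobytes()).hexdigest()
    path = CACHE_DIR / f"{key}.npy"
    if path.exists():
        return np.load(path)
    depth = estimate_fn(mask)
    np.save(path, depth)
    return depth
```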
API Reference
For detailed API documentation, see the 3D Generation API Reference.
Research and State-of-the-Art Methods
For comprehensive information on the latest research in 3D reconstruction, including:
- Neural Radiance Fields (NeRF) methods
- 3D Gaussian Splatting techniques
- Diffusion-based approaches
- Monocular depth estimation models
See the Research References page for papers, code repositories, and detailed comparisons of state-of-the-art methods.
Next Steps
After generating 3D models, move on to Synthetic Rendering to create synthetic images.