summaryrefslogtreecommitdiff
diff options
context:
space:
mode:
authorskal <pascal.massimino@gmail.com>2026-02-12 11:42:22 +0100
committerskal <pascal.massimino@gmail.com>2026-02-12 11:42:22 +0100
commitf4ef706409ad44cac26abb46fe8b2ddb78ec6a9c (patch)
tree2fc35e0bdf22111a2f1f6cc4937fb369efe745df
parent542958a8e97f8a000a9c8434408884cb9cb63705 (diff)
CNN v2 documentation update - Phase 5 complete
Updated project status to reflect CNN v2 implementation completion. Changes: - TODO.md: Marked Task #85 as [READY FOR TRAINING] - All 5 phases complete - Infrastructure ready for model training and integration - PROJECT_CONTEXT.md: Updated Effects section - Added CNN v2 parametric static features reference - Added CNN_V2.md to technical documentation list Status summary: ✅ Phase 1: Static features shader (8×f16 packed, 3 mip levels) ✅ Phase 2: C++ effect class (CNNv2Effect) ✅ Phase 3: Training pipeline (train_cnn_v2.py, export) ✅ Phase 4: Validation tooling (validate_cnn_v2.sh) ✅ Phase 5: Render pipeline (compute passes, bind groups) Next steps: Train model, generate layer shaders, demo integration Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
-rw-r--r--PROJECT_CONTEXT.md4
-rw-r--r--TODO.md12
2 files changed, 11 insertions, 5 deletions
diff --git a/PROJECT_CONTEXT.md b/PROJECT_CONTEXT.md
index fb6f931..b889b2e 100644
--- a/PROJECT_CONTEXT.md
+++ b/PROJECT_CONTEXT.md
@@ -36,7 +36,7 @@
- **Audio:** Sample-accurate sync. Zero heap allocations per frame. Variable tempo. Comprehensive tests.
- **Shaders:** Parameterized effects (UniformHelper, .seq syntax). Beat-synchronized animation support (`beat_time`, `beat_phase`). Modular WGSL composition.
- **3D:** Hybrid SDF/rasterization with BVH. Binary scene loader. Blender pipeline.
-- **Effects:** CNN post-processing foundation (3-layer architecture, modular snippets). CNNEffect validated in demo.
+- **Effects:** CNN post-processing foundation (3-layer architecture, modular snippets). CNNEffect validated in demo. CNN v2 with parametric static features (7D input: RGBD + UV + sin encoding) ready for training.
- **Tools:** CNN test tool (readback works, output incorrect - under investigation). Texture readback utility functional. Timeline editor (web-based, beat-aligned, audio playback).
- **Build:** Asset dependency tracking. Size measurement. Hot-reload (debug-only).
- **Testing:** **36/36 passing (100%)**
@@ -57,7 +57,7 @@ See `TODO.md` for current priorities and active tasks.
- `doc/CONTRIBUTING.md` - Development protocols
**Technical Reference:**
-- Core: `ASSET_SYSTEM.md`, `SEQUENCE.md`, `TRACKER.md`, `3D.md`, `CNN_EFFECT.md`
+- Core: `ASSET_SYSTEM.md`, `SEQUENCE.md`, `TRACKER.md`, `3D.md`, `CNN_EFFECT.md`, `CNN_V2.md`
- Formats: `SCENE_FORMAT.md`, `MASKING_SYSTEM.md`
- Tools: `BUILD.md`, `WORKSPACE_SYSTEM.md`, `SIZE_MEASUREMENT.md`, `CNN_TEST_TOOL.md`, `tools/timeline_editor/README.md`
diff --git a/TODO.md b/TODO.md
index 17ff54d..39b6857 100644
--- a/TODO.md
+++ b/TODO.md
@@ -24,7 +24,7 @@ Self-contained workspaces for parallel demo development.
---
-## Priority 2: CNN v2 - Parametric Static Features (Task #85) [IN PROGRESS]
+## Priority 2: CNN v2 - Parametric Static Features (Task #85) [READY FOR TRAINING]
Enhanced CNN post-processing with multi-dimensional feature inputs.
@@ -35,9 +35,15 @@ Enhanced CNN post-processing with multi-dimensional feature inputs.
- ✅ Phase 2: C++ effect class (CNNv2Effect skeleton, multi-pass architecture)
- ✅ Phase 3: Training pipeline (`train_cnn_v2.py`, `export_cnn_v2_shader.py`)
- ✅ Phase 4: Validation tooling (`scripts/validate_cnn_v2.sh`)
-- ⏳ Phase 5: Full implementation (bind groups, multi-pass execution, layer shaders)
+- ✅ Phase 5: Render pipeline (compute passes, bind groups, texture management)
-**Next:** Complete CNNv2Effect render pipeline, test with trained checkpoint
+**Implementation complete:**
+- Static features compute pass functional
+- Multi-pass architecture ready
+- Layer shader integration structure in place
+- All tests passing (36/36)
+
+**Next:** Train model, generate layer shaders, integrate into demo
**Key improvements over v1:**
- 7D static feature input (vs 4D RGB)