| author | skal <pascal.massimino@gmail.com> | 2026-03-21 08:55:20 +0100 |
|---|---|---|
| committer | skal <pascal.massimino@gmail.com> | 2026-03-21 08:55:20 +0100 |
| commit | b8b2707d7b3f2eaabd896395f2625e13405a24e2 (patch) | |
| tree | fbfe5aafec4646458f5e1440a60978a02b3fbaad /TODO.md | |
| parent | fe008df92f7a68d81c9bedb4328da7001e0775f0 (diff) | |
docs: session handoff — CNN v3 Phase 4 complete
- TODO.md: mark Phase 4 done, add FiLM MLP training details (blocked on
train_cnn_v3.py), clarify what 'real' set_film_params() requires
- COMPLETED.md: archive Phase 4 with alignment fix note (vec3u→64/96 bytes)
handoff(Gemini): next up CNN v3 Phase 5 (parity validation) or train_cnn_v3.py
Diffstat (limited to 'TODO.md')
| -rw-r--r-- | TODO.md | 19 |
1 file changed, 13 insertions(+), 6 deletions(-)
```diff
@@ -69,16 +69,23 @@ PyTorch / HTML WebGPU / C++ WebGPU.
 **Design:** `cnn_v3/docs/CNN_V3.md`
 
 **Phases:**
-1. ✅ G-buffer: `GBufferEffect` integrated. Assets, CMake, demo_effects.h, test all wired. 35/35 tests pass.
-   - NodeTypes: `GBUF_ALBEDO`, `GBUF_DEPTH32`, `GBUF_R8`, `GBUF_RGBA32UINT`
-   - Shaders: `cnn_v3/shaders/gbuf_raster.wgsl` (ShaderComposer), `gbuf_pack.wgsl`
-   - SDF/shadow passes TODO (placeholder: shadow=1, transp=0)
-   - Howto: `cnn_v3/docs/HOWTO.md`
+1. ✅ G-buffer: `GBufferEffect` integrated. SDF/shadow placeholder (shadow=1, transp=0).
 2. ✅ Training infrastructure: `blender_export.py`, `pack_blender_sample.py`, `pack_photo_sample.py`
 3. ✅ WGSL shaders: cnn_v3_common (snippet), enc0, enc1, bottleneck, dec1, dec0
-4. ✅ C++ CNNv3Effect + FiLM uniform upload
+4. ✅ C++ `CNNv3Effect`: 5 compute passes, FiLM uniform upload, `set_film_params()` API
+   - Params alignment fix: WGSL `vec3u` align=16 → C++ structs 64/96 bytes
+   - Weight offsets as explicit formulas (e.g. `20*4*9+4`)
+   - FiLM γ/β: identity defaults; real values require trained MLP (see below)
 5. Parity validation (test vectors, ≤1/255 per pixel)
 
+**FiLM MLP training** (blocks meaningful Phase 4 output):
+- Needs `cnn_v3/training/train_cnn_v3.py` — not yet written
+- MLP: `Linear(5→16) → ReLU → Linear(16→48)` trained jointly with U-Net
+- Input: `[beat_phase, beat_time/8, audio_intensity, style_p0, style_p1]`
+- Output: γ/β for enc0(4ch) + enc1(8ch) + dec1(4ch) + dec0(4ch) = 40 floats
+- Trained weights (~3 KB f16) stored in `.bin` after conv weights; loaded at runtime
+- See `cnn_v3/docs/CNN_V3.md` §5 for full MLP spec and §11 for training pipeline plan
+
 ## Future: CNN v2 8-bit Quantization
 
 Reduce weights from f16 (~3.2 KB) to i8 (~1.6 KB).
```
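The `vec3u` alignment fix in the diff can be illustrated with a minimal sketch: WGSL gives `vec3<u32>` a size of 12 bytes but an alignment of 16, so a C++ (or here, `ctypes`) mirror struct needs an explicit 4-byte pad, and whole uniform blocks round up to 16-byte multiples. The field names and the example packed sizes fed to `round_up` are hypothetical; only the alignment rule itself is from the WGSL spec.

```python
import ctypes

# WGSL vec3<u32>: size 12, alignment 16 → the host-side mirror struct needs
# an explicit 4-byte pad. Field names here are illustrative, not the real ones.
class Vec3uPadded(ctypes.Structure):
    _fields_ = [
        ("x", ctypes.c_uint32),
        ("y", ctypes.c_uint32),
        ("z", ctypes.c_uint32),
        ("_pad", ctypes.c_uint32),  # 12 → 16 bytes, matches WGSL align=16
    ]

assert ctypes.sizeof(Vec3uPadded) == 16

def round_up(n: int, align: int) -> int:
    """Round n up to the next multiple of align (uniform struct sizing)."""
    return (n + align - 1) // align * align

# Whole uniform structs round up to 16-byte multiples; the 64/96-byte totals
# in the diff are consistent with packed payloads in (48, 64] and (80, 96]
# (the 52/84 inputs below are hypothetical examples, not the real layouts).
assert round_up(52, 16) == 64
assert round_up(84, 16) == 96
```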

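The FiLM MLP described in the TODO (`Linear(5→16) → ReLU → Linear(16→48)`, with γ/β for 20 channels = 40 floats consumed) can be sketched in plain Python with random placeholder weights. The slicing order across enc0/enc1/dec1/dec0 is an assumption, and note the mismatch the sketch makes visible: the last layer is 48-wide while only 40 floats are listed as output.

```python
import random

random.seed(0)

def make_linear(n_in, n_out):
    # Placeholder weights; real values come from joint training with the U-Net.
    W = [[random.uniform(-0.1, 0.1) for _ in range(n_out)] for _ in range(n_in)]
    b = [0.0] * n_out
    return W, b

def linear(x, W, b):
    return [b[j] + sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(b))]

def relu(x):
    return [max(0.0, v) for v in x]

W1, b1 = make_linear(5, 16)
W2, b2 = make_linear(16, 48)

def film_mlp(cond):
    # Linear(5→16) → ReLU → Linear(16→48), per the TODO spec.
    return linear(relu(linear(cond, W1, b1)), W2, b2)

# [beat_phase, beat_time/8, audio_intensity, style_p0, style_p1]
cond = [0.25, 0.5, 0.8, 0.0, 1.0]
out = film_mlp(cond)
assert len(out) == 48

# Per-layer channel counts from the TODO: 4+8+4+4 = 20 channels → 40 γ/β floats.
channels = {"enc0": 4, "enc1": 8, "dec1": 4, "dec0": 4}
gamma_beta, off = {}, 0
for name, c in channels.items():
    gamma_beta[name] = (out[off:off + c], out[off + c:off + 2 * c])  # (γ, β)
    off += 2 * c
assert off == 40  # 8 of the 48 outputs are unused under this assumed split
```

The interleaving (all γ then all β per layer, layers in network order) is a guess for illustration; the authoritative layout is whatever `cnn_v3/docs/CNN_V3.md` §5 specifies.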