path: root/cnn_v3/docs/HOWTO.md
Diffstat (limited to 'cnn_v3/docs/HOWTO.md')
-rw-r--r--  cnn_v3/docs/HOWTO.md | 8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/cnn_v3/docs/HOWTO.md b/cnn_v3/docs/HOWTO.md
index ff8793f..67f7931 100644
--- a/cnn_v3/docs/HOWTO.md
+++ b/cnn_v3/docs/HOWTO.md
@@ -371,7 +371,9 @@ cnn_v3_effect->set_film_params(
style_p0, style_p1);
```
-FiLM γ/β default to identity (γ=1, β=0) until `train_cnn_v3.py` produces a trained MLP.
+FiLM MLP weights are auto-loaded from `ASSET_WEIGHTS_CNN_V3_FILM_MLP` at construction.
+The MLP forward pass (`Linear(5→16)→ReLU→Linear(16→72)`) runs CPU-side in `set_film_params()`.
+The effect falls back to identity (γ=1, β=0) if no `.bin` is present.
---
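The CPU-side forward pass described above (`Linear(5→16)→ReLU→Linear(16→72)`) can be sketched in NumPy. This is a minimal illustration, not the project's implementation: the weight array names, shapes, and the identity fallback shown here are assumptions based only on the layer sizes the diff states.

```python
import numpy as np

def film_mlp_forward(style_params, w0, b0, w1, b1):
    """Sketch of the FiLM MLP: Linear(5->16) -> ReLU -> Linear(16->72).

    style_params: shape (5,)  -- the style inputs passed to set_film_params()
    w0: (16, 5), b0: (16,), w1: (72, 16), b1: (72,)  -- assumed shapes
    Returns the 72 packed per-channel gamma/beta values.
    """
    h = np.maximum(0.0, w0 @ style_params + b0)  # Linear(5->16) + ReLU
    return w1 @ h + b1                           # Linear(16->72)

def identity_film(n_channels):
    # Hypothetical fallback when no weights .bin is present:
    # gamma = 1, beta = 0 leaves activations unchanged.
    return np.ones(n_channels), np.zeros(n_channels)
```

How the 72 outputs are split into γ/β per layer is not specified in this hunk, so the sketch stops at the packed vector.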
@@ -407,6 +409,7 @@ Test vectors generated by `cnn_v3/training/gen_test_vectors.py` (PyTorch referen
| 7 — G-buffer visualizer (C++) | ✅ Done | GBufViewEffect, 36/36 tests pass |
| 8 — Architecture upgrade [8,16] | ✅ Done | enc_channels=[8,16], multi-scale loss, 16ch textures split into lo/hi pairs |
| 7 — Sample loader (web tool) | ✅ Done | "Load sample directory" in cnn_v3/tools/ |
+| 9 — Training bug fixes | ✅ Done | dec0 ReLU removed (output unblocked); FiLM MLP loaded at runtime |
---
@@ -428,7 +431,8 @@ The common snippet provides `get_w()` and `unpack_8ch()`.
- AvgPool 2×2 for downsampling (exact, deterministic)
- Nearest-neighbor for upsampling (integer `coord / 2`)
- Skip connections: channel concatenation (not add)
-- FiLM applied after conv+bias, before ReLU: `max(0, γ·x + β)`
+- FiLM applied after conv+bias, before ReLU: `max(0, γ·x + β)` (enc0/enc1/dec1)
+- dec0 final layer: FiLM then sigmoid directly — **no ReLU** (`sigmoid(γ·x + β)`)
- No batch norm at inference
- Weight layout: OIHW (out × in × kH × kW), biases after conv weights
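The inference rules in the list above can be summarized in a short NumPy sketch. This is an illustrative reference only (single-channel, 2-D arrays, assumed even spatial dims), not the shader or C++ code path:

```python
import numpy as np

def film_relu(x, gamma, beta):
    # enc0/enc1/dec1: FiLM after conv+bias, before ReLU: max(0, gamma*x + beta)
    return np.maximum(0.0, gamma * x + beta)

def film_sigmoid(x, gamma, beta):
    # dec0 final layer: FiLM then sigmoid directly, no ReLU: sigmoid(gamma*x + beta)
    return 1.0 / (1.0 + np.exp(-(gamma * x + beta)))

def avgpool2x2(x):
    # Exact, deterministic 2x2 average pooling (assumes H and W are even).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_nn(x):
    # Nearest-neighbor upsample; matches integer `coord / 2` source indexing.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
```

With γ=1, β=0 the FiLM path reduces to a plain ReLU (or plain sigmoid for dec0), which is the identity fallback behavior.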