| author | skal <pascal.massimino@gmail.com> | 2026-02-14 02:12:12 +0100 |
|---|---|---|
| committer | skal <pascal.massimino@gmail.com> | 2026-02-14 02:12:12 +0100 |
| commit | 043044ae7563c2f92760c428765e35b411da82ea | |
| tree | 0d640fec1517169d195747707b6c589c92fe7161 /doc/CNN_V2.md | |
| parent | 4d119a1b6a6f460ca6d5a8ef85176c45663fd40a | |
Replace hard clamp with sigmoid activation in CNN v2
Fixes a training collapse in which the p1/p2 channels saturate because
clamp() blocks gradients at its boundaries. Sigmoid provides a smooth
[0,1] mapping with continuous gradients (see the sketch after the change list).
Changes:
- Layer 0: clamp(x, 0, 1) → sigmoid(x)
- Final layer: clamp(x, 0, 1) → sigmoid(x)
- Middle layers: ReLU unchanged (already stable)
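As context (not part of the commit), a minimal PyTorch sketch of the failure mode the message describes: clamp() passes zero gradient for any saturated pre-activation, so a collapsed channel can never recover, whereas sigmoid keeps a small but nonzero gradient everywhere.

```python
import torch

# Pre-activation values: one inside [0,1], one saturated above it.
x = torch.tensor([0.5, 3.0], requires_grad=True)

torch.clamp(x, 0.0, 1.0).sum().backward()
print(x.grad)   # tensor([1., 0.]) -> the saturated channel gets no gradient

x.grad = None
torch.sigmoid(x).sum().backward()
print(x.grad)   # tensor([0.2350, 0.0452]) -> nonzero gradient everywhere
```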
Updated files:
- training/train_cnn_v2.py: PyTorch model activations
- workspaces/main/shaders/cnn_v2/cnn_v2_compute.wgsl: WGSL shader
- tools/cnn_v2_test/index.html: HTML validation tool
- doc/CNN_V2.md: Documentation
Validation:
- Build clean (no shader errors)
- 34/36 tests pass (2 unrelated script tests fail)
- 10-epoch training: loss 0.153 → 0.088 (good convergence)
- cnn_test processes images successfully
Breaking change: old checkpoints trained with clamp() are incompatible;
retraining from scratch is required.
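One way to make that incompatibility fail loudly instead of silently is to tag checkpoints with the activation scheme and reject mismatches on load. This is a hypothetical sketch, not part of the commit; the tag name and checkpoint layout are invented here.

```python
import torch

ACTIVATION_TAG = "sigmoid-v2"  # hypothetical marker, not in the real weight format

def save_checkpoint(model: torch.nn.Module, path: str) -> None:
    torch.save({"activation": ACTIVATION_TAG,
                "state_dict": model.state_dict()}, path)

def load_checkpoint(model: torch.nn.Module, path: str) -> None:
    ckpt = torch.load(path, map_location="cpu")
    # Clamp-era checkpoints carry no tag, so .get() returns None and is rejected.
    if ckpt.get("activation") != ACTIVATION_TAG:
        raise ValueError(f"{path} predates the sigmoid change; retrain from scratch.")
    model.load_state_dict(ckpt["state_dict"])
```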
handoff(Claude): CNN v2 sigmoid activation implemented and validated.
Diffstat (limited to 'doc/CNN_V2.md')
| -rw-r--r-- | doc/CNN_V2.md | 9 |
1 file changed, 8 insertions(+), 1 deletion(-)
```diff
diff --git a/doc/CNN_V2.md b/doc/CNN_V2.md
index abef606..fa00b32 100644
--- a/doc/CNN_V2.md
+++ b/doc/CNN_V2.md
@@ -18,11 +18,12 @@ CNN v2 extends the original CNN post-processing effect with parametric static fe
 - Bias integrated as static feature dimension
 - Storage buffer architecture (dynamic layer count)
 - Binary weight format v2 for runtime loading
+- Sigmoid activation for layer 0 and final layer (smooth [0,1] mapping)
 
 **Status:** ✅ Complete. Training pipeline functional, validation tools ready, mip-level support integrated.
 
 **Known Issues:**
-- ⚠️ **cnn_test output differs from HTML validation tool** - Visual discrepancy remains after fixing uv_y inversion and Layer 0 activation. Root cause under investigation. Both tools should produce identical output given same weights/input.
+- ⚠️ **Old checkpoints incompatible** - Models trained with `clamp()` activation won't work correctly with new `sigmoid()` implementation. Retrain from scratch with latest code.
 
 **TODO:**
 - 8-bit quantization with QAT for 2× size reduction (~1.6 KB)
@@ -106,6 +107,12 @@ Input RGBD → Static Features Compute → CNN Layers → Output RGBA
 - All layers: uniform 12D input, 4D output (ping-pong buffer)
 - Storage: `texture_storage_2d<rgba32uint>` (4 channels as 2×f16 pairs)
 
+**Activation Functions:**
+- Layer 0 & final layer: `sigmoid(x)` for smooth [0,1] mapping
+- Middle layers: `ReLU` (max(0, x))
+- Rationale: Sigmoid prevents gradient blocking at boundaries, enabling better convergence
+- Breaking change: Models trained with `clamp(x, 0, 1)` are incompatible, retrain required
+
 ---
 
 ## Static Features (7D + 1 bias)
```
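The diff describes the activation scheme in prose; below is a minimal, hypothetical PyTorch sketch of that scheme. The class name, layer count, and the 1×1-convolution formulation are illustrative assumptions, not the actual `training/train_cnn_v2.py` code.

```python
import torch
import torch.nn as nn

class CNNv2Sketch(nn.Module):
    """Illustrative stand-in: uniform 12D input (4D activations + 8D static
    features) and 4D output per layer; sigmoid on layer 0 and the final
    layer, ReLU in between."""

    def __init__(self, num_layers: int = 4):
        super().__init__()
        # 1x1 convolutions model the per-pixel dense layers.
        self.layers = nn.ModuleList(
            [nn.Conv2d(12, 4, kernel_size=1) for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor, static_feats: torch.Tensor) -> torch.Tensor:
        # x: (N, 4, H, W) RGBD; static_feats: (N, 8, H, W) = 7 features + bias.
        for i, layer in enumerate(self.layers):
            z = layer(torch.cat([x, static_feats], dim=1))
            if i == 0 or i == len(self.layers) - 1:
                x = torch.sigmoid(z)      # was: torch.clamp(z, 0.0, 1.0)
            else:
                x = torch.relu(z)         # middle layers unchanged
        return x
```

For example, `CNNv2Sketch()(torch.rand(1, 4, 64, 64), torch.rand(1, 8, 64, 64))` yields a `(1, 4, 64, 64)` tensor in (0, 1), matching the final sigmoid's range.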
