author    skal <pascal.massimino@gmail.com>    2026-02-14 02:13:54 +0100
committer skal <pascal.massimino@gmail.com>    2026-02-14 02:13:54 +0100
commit    d7f75a4f5b4e67e5fb1bac4efdd740ac7099e884 (patch)
tree      f92a402a1536017d02cdf35ba359f0f56e43ea8b /doc
parent    043044ae7563c2f92760c428765e35b411da82ea (diff)
Update docs: CNN v2 sigmoid activation summary
- PROJECT_CONTEXT.md: Updated Effects section (sigmoid, stable training)
- TODO.md: Added sigmoid activation to CNN v2 status
- CNN_V2.md: Streamlined (removed outdated issues, updated code examples)

handoff(Claude): Documentation synchronized with sigmoid implementation.
Diffstat (limited to 'doc')
-rw-r--r--    doc/CNN_V2.md    11
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/doc/CNN_V2.md b/doc/CNN_V2.md
index fa00b32..2d1d4c4 100644
--- a/doc/CNN_V2.md
+++ b/doc/CNN_V2.md
@@ -20,14 +20,13 @@ CNN v2 extends the original CNN post-processing effect with parametric static features
- Binary weight format v2 for runtime loading
- Sigmoid activation for layer 0 and final layer (smooth [0,1] mapping)
-**Status:** ✅ Complete. Training pipeline functional, validation tools ready, mip-level support integrated.
+**Status:** ✅ Complete. Sigmoid activation, stable training, validation tools operational.
-**Known Issues:**
-- ⚠️ **Old checkpoints incompatible** - Models trained with `clamp()` activation won't work correctly with new `sigmoid()` implementation. Retrain from scratch with latest code.
+**Breaking Change:**
+- Models trained with `clamp()` incompatible. Retrain required.
**TODO:**
- 8-bit quantization with QAT for 2× size reduction (~1.6 KB)
-- Debug cnn_test vs HTML tool output difference
---
@@ -339,7 +338,7 @@ class CNNv2(nn.Module):
# Layer 0: input RGBD (4D) + static (8D) = 12D
x = torch.cat([input_rgbd, static_features], dim=1)
x = self.layers[0](x)
- x = torch.clamp(x, 0, 1) # Output layer 0 (4 channels)
+ x = torch.sigmoid(x) # Soft [0,1] for layer 0
# Layer 1+: previous output (4D) + static (8D) = 12D
for i in range(1, len(self.layers)):
@@ -348,7 +347,7 @@ class CNNv2(nn.Module):
if i < len(self.layers) - 1:
x = F.relu(x)
else:
- x = torch.clamp(x, 0, 1) # Final output [0,1]
+ x = torch.sigmoid(x) # Soft [0,1] for final layer
return x # RGBA output
```
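For reference, a minimal runnable sketch of the model after this change, assuming 3×3 `nn.Conv2d` layers and the 12-channel layout (4 RGBD + 8 static) shown in the diff; the layer count, hidden width, and kernel size are illustrative, not taken from the actual training scripts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNv2(nn.Module):
    """Sketch of the CNN v2 forward pass after the clamp() -> sigmoid() change.

    Assumptions (illustrative, not from the real training code): 3 conv layers,
    3x3 kernels, 4 hidden channels, inputs in NCHW layout with matching H/W.
    """
    def __init__(self, num_layers=3, hidden=4, rgbd_dim=4, static_dim=8):
        super().__init__()
        # Every layer consumes 4 feature channels + 8 static channels = 12 inputs;
        # every layer emits 4 channels (the final one being RGBA).
        self.layers = nn.ModuleList(
            [nn.Conv2d(rgbd_dim + static_dim, hidden, kernel_size=3, padding=1)
             for _ in range(num_layers)]
        )

    def forward(self, input_rgbd, static_features):
        # Layer 0: input RGBD (4D) + static (8D) = 12D
        x = torch.cat([input_rgbd, static_features], dim=1)
        x = torch.sigmoid(self.layers[0](x))   # soft [0,1] instead of clamp()

        # Layer 1+: previous output (4D) + static (8D) = 12D
        for i in range(1, len(self.layers)):
            x = self.layers[i](torch.cat([x, static_features], dim=1))
            if i < len(self.layers) - 1:
                x = F.relu(x)
            else:
                x = torch.sigmoid(x)           # soft [0,1] for the final RGBA output
        return x
```

Because sigmoid saturates where clamp() was hard-cut, checkpoints trained with the old activation produce shifted outputs under the new code, which is why the doc now flags retraining as required rather than optional.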