| author | skal <pascal.massimino@gmail.com> | 2026-02-10 08:01:25 +0100 |
|---|---|---|
| committer | skal <pascal.massimino@gmail.com> | 2026-02-10 08:01:25 +0100 |
| commit | 47397444b30b0f461b1633297a68300179586fda (patch) | |
| tree | b84a59b6a6595b609fe71980e81b99cc1b180693 /workspaces/main/shaders/cnn/cnn_activation.wgsl | |
| parent | c51c146da9590845b864cbba3a7317c5b5bed56a (diff) | |
feat: Add CNN post-processing effect with modular WGSL architecture
Implements a multi-layer convolutional neural network shader for stylized
post-processing of 3D-rendered scenes:
**Core Components:**
- CNNEffect: C++ effect class with single-layer rendering (expandable to multi-pass)
- Modular WGSL snippets: cnn_activation, cnn_conv3x3/5x5/7x7, cnn_weights_generated
- Placeholder identity-like weights for initial testing, to be replaced by trained weights (sketched below)
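
As a concrete illustration, here is a minimal WGSL sketch of what a 3×3 convolution snippet with identity-like placeholder weights could look like. Only the name `cnn_conv3x3` and the placeholder-identity idea come from this commit; the signature, parameters, and weight layout are illustrative assumptions.

```wgsl
// Sketch only: signature and weight layout are assumed, not taken from the repo.
// Identity-like placeholder kernel (center tap 1.0, all others 0.0), so the
// untrained layer passes the image through unchanged.
fn cnn_conv3x3(tex: texture_2d<f32>, smp: sampler,
               uv: vec2<f32>, texel: vec2<f32>) -> vec4<f32> {
  var w = array<f32, 9>(
    0.0, 0.0, 0.0,
    0.0, 1.0, 0.0,
    0.0, 0.0, 0.0
  );
  var acc = vec4<f32>(0.0);
  for (var j = 0; j < 3; j++) {
    for (var i = 0; i < 3; i++) {
      let offset = vec2<f32>(f32(i - 1), f32(j - 1)) * texel;
      acc += w[j * 3 + i] * textureSampleLevel(tex, smp, uv + offset, 0.0);
    }
  }
  return acc;
}
```
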
**Architecture:**
- Flexible kernel sizes (3×3, 5×5, 7×7) via separate snippet files
- ShaderComposer integration (#include resolution)
- Residual connections (input + processed output), shown in the sketch after this list
- Supports parallel convolutions (design ready, single conv implemented)
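
A hedged sketch of how the composed main shader might wire these pieces together. The `#include` resolution (handled by ShaderComposer) and the residual add come from the description above; the include paths, binding layout, and entry-point name are assumptions.

```wgsl
// Sketch only: paths, bindings, and names are illustrative.
#include "cnn/cnn_activation.wgsl"
#include "cnn/cnn_conv3x3.wgsl"
#include "cnn/cnn_weights_generated.wgsl"

@group(0) @binding(0) var src_tex: texture_2d<f32>;
@group(0) @binding(1) var src_smp: sampler;

@fragment
fn fs_main(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
  let texel = vec2<f32>(1.0) / vec2<f32>(textureDimensions(src_tex));
  let input = textureSampleLevel(src_tex, src_smp, uv, 0.0);
  let conv = cnn_tanh(cnn_conv3x3(src_tex, src_smp, uv, texel));
  return input + conv;  // residual connection: input + processed output
}
```
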
**Size Impact:**
- ~3-4 KB shader code (snippets + main shader)
- ~2-4 KB weights (depends on network architecture when trained)
- Total: ~5-8 KB (acceptable for 64k demo)
**Testing:**
- CNNEffect added to test_demo_effects.cc
- 36/36 tests passing (100%)
**Next Steps:**
- Training script (scripts/train_cnn.py) to generate real weights
- Multi-layer rendering with ping-pong textures
- Weight quantization for size optimization (see the unpacking sketch below)
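
One plausible shape for the quantization step, sketched under assumptions: weights stored as packed snorm8 values and expanded on the GPU with WGSL's built-in `unpack4x8snorm`. The function name, scale parameter, and packing scheme are hypothetical; only "weight quantization" itself comes from the list above.

```wgsl
// Sketch only: packing scheme and names are assumed. Four snorm8 weights per
// u32 (about 4x smaller than f32 storage); a per-layer scale restores the
// range lost when normalizing trained weights into [-1, 1].
fn cnn_unpack_weight4(packed: u32, scale: f32) -> vec4<f32> {
  return unpack4x8snorm(packed) * scale;
}
```
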
handoff(Claude): CNN effect foundation complete, ready for training integration
Diffstat (limited to 'workspaces/main/shaders/cnn/cnn_activation.wgsl')
| -rw-r--r-- | workspaces/main/shaders/cnn/cnn_activation.wgsl | 18 |
1 file changed, 18 insertions, 0 deletions
diff --git a/workspaces/main/shaders/cnn/cnn_activation.wgsl b/workspaces/main/shaders/cnn/cnn_activation.wgsl
new file mode 100644
index 0000000..4fe771e
--- /dev/null
+++ b/workspaces/main/shaders/cnn/cnn_activation.wgsl
@@ -0,0 +1,18 @@
+// CNN activation functions
+// 4 functions: tanh, ReLU, sigmoid, leaky_relu
+
+fn cnn_tanh(x: vec4<f32>) -> vec4<f32> {
+  return tanh(x);
+}
+
+fn cnn_relu(x: vec4<f32>) -> vec4<f32> {
+  return max(vec4<f32>(0.0), x);
+}
+
+fn cnn_sigmoid(x: vec4<f32>) -> vec4<f32> {
+  return 1.0 / (1.0 + exp(-x));
+}
+
+fn cnn_leaky_relu(x: vec4<f32>, alpha: f32) -> vec4<f32> {
+  return max(alpha * x, x);
+}
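
Note on `cnn_leaky_relu`: the branchless form `max(alpha * x, x)` matches the usual piecewise leaky-ReLU definition only for `alpha` in [0, 1] (it yields `x` for `x >= 0` and `alpha * x` for `x < 0`), so callers should keep `alpha` in that range.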
