From 47397444b30b0f461b1633297a68300179586fda Mon Sep 17 00:00:00 2001 From: skal Date: Tue, 10 Feb 2026 08:01:25 +0100 Subject: feat: Add CNN post-processing effect with modular WGSL architecture MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Implements multi-layer convolutional neural network shader for stylized post-processing of 3D rendered scenes: **Core Components:** - CNNEffect: C++ effect class with single-layer rendering (expandable to multi-pass) - Modular WGSL snippets: cnn_activation, cnn_conv3x3/5x5/7x7, cnn_weights_generated - Placeholder identity-like weights for initial testing (to be replaced by trained weights) **Architecture:** - Flexible kernel sizes (3×3, 5×5, 7×7) via separate snippet files - ShaderComposer integration (#include resolution) - Residual connections (input + processed output) - Supports parallel convolutions (design ready, single conv implemented) **Size Impact:** - ~3-4 KB shader code (snippets + main shader) - ~2-4 KB weights (depends on network architecture when trained) - Total: ~5-8 KB (acceptable for 64k demo) **Testing:** - CNNEffect added to test_demo_effects.cc - 36/36 tests passing (100%) **Next Steps:** - Training script (scripts/train_cnn.py) to generate real weights - Multi-layer rendering with ping-pong textures - Weight quantization for size optimization handoff(Claude): CNN effect foundation complete, ready for training integration --- workspaces/main/shaders/cnn/cnn_activation.wgsl | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) create mode 100644 workspaces/main/shaders/cnn/cnn_activation.wgsl (limited to 'workspaces/main/shaders/cnn/cnn_activation.wgsl') diff --git a/workspaces/main/shaders/cnn/cnn_activation.wgsl b/workspaces/main/shaders/cnn/cnn_activation.wgsl new file mode 100644 index 0000000..4fe771e --- /dev/null +++ b/workspaces/main/shaders/cnn/cnn_activation.wgsl @@ -0,0 +1,18 @@ +// CNN activation functions +// 4 functions: tanh, ReLU, sigmoid, leaky_relu + +fn cnn_tanh(x: vec4) -> vec4 { + return tanh(x); +} + +fn cnn_relu(x: vec4) -> vec4 { + return max(vec4(0.0), x); +} + +fn cnn_sigmoid(x: vec4) -> vec4 { + return 1.0 / (1.0 + exp(-x)); +} + +fn cnn_leaky_relu(x: vec4, alpha: f32) -> vec4 { + return max(alpha * x, x); +} -- cgit v1.2.3