Diffstat (limited to 'doc/CNN_EFFECT.md')
 doc/CNN_EFFECT.md | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 57 insertions(+), 28 deletions(-)
diff --git a/doc/CNN_EFFECT.md b/doc/CNN_EFFECT.md
index ae0f38a..4659fd3 100644
--- a/doc/CNN_EFFECT.md
+++ b/doc/CNN_EFFECT.md
@@ -21,27 +21,46 @@ Trainable convolutional neural network layers for artistic stylization (painterl
## Architecture
-### Coordinate-Aware Layer 0
+### RGBD → Grayscale Pipeline
-Layer 0 accepts normalized (x,y) patch center coordinates alongside RGBA samples:
+**Input:** RGBD (RGB + inverse depth D=1/z)
+**Output:** Grayscale (1 channel)
+**Layer Input:** 7 channels = [RGBD, UV coords, grayscale] all normalized to [-1,1]
+
+**Architecture:**
+- **Inner layers (0..N-2):** Conv2d(7→4) - output RGBD
+- **Final layer (N-1):** Conv2d(7→1) - output grayscale
```wgsl
-fn cnn_conv3x3_with_coord(
+// Inner layers: 7→4 (RGBD output)
+fn cnn_conv3x3_7to4(
tex: texture_2d<f32>,
samp: sampler,
- uv: vec2<f32>, # Center position [0,1]
+ uv: vec2<f32>,
resolution: vec2<f32>,
- rgba_weights: array<mat4x4<f32>, 9>, # 9 samples × 4×4 matrix
- coord_weights: mat2x4<f32>, # 2 coords → 4 outputs
- bias: vec4<f32>
+  original: vec4<f32>,               // Original RGBD [-1,1]
+  weights: array<array<f32, 8>, 36>  // 9 pos × 4 out × (7 weights + bias)
) -> vec4<f32>
-```
-**Input structure:** 9 RGBA samples (36 values) + 1 xy coordinate (2 values) = 38 inputs → 4 outputs
+// Final layer: 7→1 (grayscale output)
+fn cnn_conv3x3_7to1(
+ tex: texture_2d<f32>,
+ samp: sampler,
+ uv: vec2<f32>,
+ resolution: vec2<f32>,
+ original: vec4<f32>,
+  weights: array<array<f32, 8>, 9>   // 9 pos × (7 weights + bias)
+) -> f32
+```
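As a reference for the signatures above, here is a minimal NumPy sketch of the final 7→1 convolution (names and the `(9, 7)`/`(9, 8)` array layout are assumptions for illustration, not the shader's actual code): for each of the 9 kernel positions, the 7-channel input vector is dotted with that position's 7 weights and the per-position bias is added.

```python
import numpy as np

def conv3x3_7to1(patch, weights):
    """Sketch of the final layer: 7 -> 1, no activation.

    patch:   (9, 7) array, the 7-channel inputs [R, G, B, D, U, V, gray]
             at the 9 kernel positions, already normalized to [-1, 1].
    weights: (9, 8) array mirroring array<array<f32, 8>, 9>:
             7 weights + 1 bias per kernel position.
    """
    acc = 0.0
    for pos in range(9):
        acc += patch[pos] @ weights[pos, :7] + weights[pos, 7]
    return acc  # denormalized later for display

# Toy check: weights that copy channel 0 of position 0.
patch = np.zeros((9, 7)); patch[0, 0] = 0.6
w = np.zeros((9, 8)); w[0, 0] = 1.0
result = conv3x3_7to1(patch, w)
```

The 7→4 inner-layer variant is the same accumulation repeated for each of the 4 output channels, followed by tanh.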
-**Size impact:** +32B coord weights, kernel-agnostic
+**Input normalization:**
+- **fs_main** normalizes textures once: `(tex - 0.5) * 2` → [-1,1]
+- **Conv functions** normalize UV coords: `(uv - 0.5) * 2` → [-1,1]
+- **Grayscale** computed from normalized RGBD: `0.2126*R + 0.7152*G + 0.0722*B`
+- **Inter-layer data** stays in [-1,1] (no denormalization)
+- **Final output** denormalized for display: `(result + 1.0) * 0.5` → [0,1]
-**Use cases:** Position-dependent stylization (vignettes, corner darkening, radial gradients)
+**Activation:** tanh for inner layers (output stays [-1,1]), none for final layer
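The normalization round trip above can be sketched in NumPy (sample texel values are hypothetical; only the formulas come from the pipeline description):

```python
import numpy as np

def normalize(x):
    # fs_main-style normalization, applied once: [0,1] -> [-1,1]
    return (x - 0.5) * 2.0

def denormalize(x):
    # final display step: [-1,1] -> [0,1]
    return (x + 1.0) * 0.5

# Assemble the 7-channel layer input for one texel.
rgbd = normalize(np.array([0.8, 0.5, 0.2, 0.1]))  # RGB + inverse depth
uv   = normalize(np.array([0.25, 0.75]))          # UV coordinates
gray = 0.2126 * rgbd[0] + 0.7152 * rgbd[1] + 0.0722 * rgbd[2]
layer_input = np.concatenate([rgbd, uv, [gray]])  # 7 channels in [-1,1]

# Inner layers apply tanh, so inter-layer data stays in [-1,1];
# only the final scalar output is denormalized for display.
```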
### Multi-Layer Architecture
@@ -80,18 +99,15 @@ workspaces/main/shaders/cnn/
### 1. Prepare Training Data
Collect input/target image pairs:
-- **Input:** Raw 3D render
-- **Target:** Artistic style (hand-painted, filtered, stylized)
+- **Input:** RGBA (RGB with inverse depth D=1/z in the alpha channel)
+- **Target:** Grayscale stylized output
```bash
-training/input/img_000.png # Raw render
-training/output/img_000.png # Stylized target
+training/input/img_000.png # RGBA render (RGB + depth)
+training/output/img_000.png # Grayscale target
```
-Use `image_style_processor.py` to generate targets:
-```bash
-python3 training/image_style_processor.py input/ output/ pencil_sketch
-```
+**Note:** Input images must be RGBA where alpha = inverse depth (1/z).
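A data-preparation step consistent with this convention might look like the following sketch (NumPy only; the array shapes and the linear eye-space depth source are assumptions, not part of the training scripts):

```python
import numpy as np

# Pack inverse depth (D = 1/z) into the alpha channel so training
# inputs match the RGBD convention above. Values are hypothetical.
h, w = 4, 4
rgb = np.random.rand(h, w, 3).astype(np.float32)              # raw render in [0,1]
z = np.random.uniform(1.0, 100.0, (h, w)).astype(np.float32)  # eye-space depth

inv_depth = (1.0 / z)[..., None]                  # D = 1/z, in (0, 1] for z >= 1
rgba = np.concatenate([rgb, inv_depth], axis=-1)  # RGBA training input
```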
### 2. Train Network
@@ -135,6 +151,14 @@ python3 training/train_cnn.py \
--output workspaces/main/shaders/cnn/cnn_weights_generated.wgsl
```
+**Generate ground truth (for shader validation):**
+```bash
+python3 training/train_cnn.py \
+ --infer training/input/img_000.png \
+ --export-only training/checkpoints/checkpoint_epoch_200.pth \
+ --output training/ground_truth.png
+```
+
### 3. Rebuild Demo
The training script auto-generates both `cnn_weights_generated.wgsl` and `cnn_layer.wgsl`:
@@ -245,20 +269,25 @@ Expands to:
**Weight Storage:**
-**Layer 0 (coordinate-aware):**
+**Inner layers (7→4 RGBD output):**
```wgsl
-const rgba_weights_layer0: array<mat4x4<f32>, 9> = array(...);
-const coord_weights_layer0 = mat2x4<f32>(
- 0.1, -0.2, 0.0, 0.0, # x-coord weights
- -0.1, 0.0, 0.2, 0.0 # y-coord weights
+// Structure: array<array<f32, 8>, 36>
+// 9 positions × 4 output channels, each with 7 weights + bias
+const weights_layer0: array<array<f32, 8>, 36> = array(
+ array<f32, 8>(w0_r, w0_g, w0_b, w0_d, w0_u, w0_v, w0_gray, bias0), // pos0_ch0
+ array<f32, 8>(w1_r, w1_g, w1_b, w1_d, w1_u, w1_v, w1_gray, bias1), // pos0_ch1
+ // ... 34 more entries
);
-const bias_layer0 = vec4<f32>(0.0, 0.0, 0.0, 0.0);
```
-**Layers 1+ (standard):**
+**Final layer (7→1 grayscale output):**
```wgsl
-const weights_layer1: array<mat4x4<f32>, 9> = array(...);
-const bias_layer1 = vec4<f32>(0.0, 0.0, 0.0, 0.0);
+// Structure: array<array<f32, 8>, 9>
+// 9 positions, each with 7 weights + bias
+const weights_layerN: array<array<f32, 8>, 9> = array(
+ array<f32, 8>(w0_r, w0_g, w0_b, w0_d, w0_u, w0_v, w0_gray, bias0), // pos0
+ // ... 8 more entries
+);
```
---