From 2adcf1bac1ec651861930eb2af00641eb23f6ef1 Mon Sep 17 00:00:00 2001
From: skal
Date: Tue, 10 Feb 2026 22:54:38 +0100
Subject: docs: Update CNN training documentation with patch extraction

Streamlined and updated all training docs with new patch-based approach.

Changes:
- HOWTO.md: Updated training section with patch/full-image examples
- CNN_EFFECT.md: Streamlined training workflow, added detector info
- training/README.md: Complete rewrite with detector comparison table

New sections:
- Detector comparison (harris, fast, shi-tomasi, gradient)
- Practical examples for different use cases
- Tips for patch size and batch size selection
- Benefits of patch-based training

Co-Authored-By: Claude Sonnet 4.5
---
 doc/CNN_EFFECT.md | 71 ++++++++++++++++++++-----------------------------------
 1 file changed, 26 insertions(+), 45 deletions(-)

(limited to 'doc/CNN_EFFECT.md')

diff --git a/doc/CNN_EFFECT.md b/doc/CNN_EFFECT.md
index 22cf985..06065b1 100644
--- a/doc/CNN_EFFECT.md
+++ b/doc/CNN_EFFECT.md
@@ -98,73 +98,54 @@ workspaces/main/shaders/cnn/
 ### 1. Prepare Training Data
 
-Collect input/target image pairs:
-- **Input:** RGBA (RGB + depth as alpha channel, D=1/z)
-- **Target:** Grayscale stylized output
-
-```bash
-training/input/img_000.png   # RGBA render (RGB + depth)
+Input/target image pairs:
+```
+training/input/img_000.png   # RGBA (RGB + alpha)
 training/output/img_000.png  # Grayscale target
 ```
 
-**Note:** Input images must be RGBA where alpha = inverse depth (1/z)
+**Note:** Alpha channel can be depth (1/z) or constant (255). Network learns from RGB primarily.
 
 ### 2. Train Network
 
+**Patch-based (Recommended)** - Preserves natural pixel scale:
 ```bash
 python3 training/train_cnn.py \
-  --input training/input \
-  --target training/output \
-  --layers 1 \
-  --kernel-sizes 3 \
-  --epochs 500 \
-  --checkpoint-every 50
+  --input training/input --target training/output \
+  --patch-size 32 --patches-per-image 64 --detector harris \
+  --layers 3 --kernel-sizes 3,5,3 \
+  --epochs 5000 --batch-size 16 --checkpoint-every 1000
 ```
 
-**Multi-layer example (3 layers with varying kernel sizes):**
+**Detectors:** `harris` (corners), `fast` (features), `shi-tomasi` (corners), `gradient` (edges)
+
+**Full-image (Legacy)** - Resizes to 256×256:
 ```bash
 python3 training/train_cnn.py \
-  --input training/input \
-  --target training/output \
-  --layers 3 \
-  --kernel-sizes 3,5,3 \
-  --epochs 1000 \
-  --checkpoint-every 100
+  --input training/input --target training/output \
+  --layers 3 --kernel-sizes 3,5,3 \
+  --epochs 10000 --batch-size 8 --checkpoint-every 1000
 ```
 
-**Note:** Training script auto-generates:
-- `cnn_weights_generated.wgsl` - weight arrays for all layers
-- `cnn_layer.wgsl` - shader with layer switches and original input binding
+**Auto-generates:**
+- `cnn_weights_generated.wgsl` - Weight arrays
+- `cnn_layer.wgsl` - Layer shader
 
-**Resume from checkpoint:**
-```bash
-python3 training/train_cnn.py \
-  --input training/input \
-  --target training/output \
-  --resume training/checkpoints/checkpoint_epoch_200.pth
-```
+### 3. Export & Validate
 
-**Export WGSL from checkpoint (no training):**
 ```bash
-python3 training/train_cnn.py \
-  --export-only training/checkpoints/checkpoint_epoch_200.pth \
-  --output workspaces/main/shaders/cnn/cnn_weights_generated.wgsl
-```
+# Export shaders
+./training/train_cnn.py --export-only checkpoints/checkpoint_epoch_5000.pth
 
-**Generate ground truth (for shader validation):**
-```bash
-python3 training/train_cnn.py \
-  --infer training/input/img_000.png \
-  --export-only training/checkpoints/checkpoint_epoch_200.pth \
-  --output training/ground_truth.png
+# Generate ground truth
+./training/train_cnn.py --infer input.png \
+  --export-only checkpoints/checkpoint_epoch_5000.pth --output ground_truth.png
 ```
 
-### 3. Rebuild Demo
+### 4. Rebuild Demo
 
-Training script auto-generates both `cnn_weights_generated.wgsl` and `cnn_layer.wgsl`:
 ```bash
-cmake --build build -j4
-./build/demo64k
+cmake --build build -j4 && ./build/demo64k
 ```
 
 ---
-- 
cgit v1.2.3
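The patch's `--detector harris --patch-size 32` options select training crops centered on detected corners rather than resizing whole images. The idea can be sketched with a minimal numpy-only implementation; all function names, the box-filter smoothing, and the plain top-k selection (no non-maximum suppression) are illustrative assumptions here, not the actual `train_cnn.py` code:

```python
import numpy as np

def box_filter(a, r=2):
    """Mean over a (2r+1)x(2r+1) window, edge-padded."""
    h, w = a.shape
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def harris_response(gray, k=0.04):
    """Harris corner score det(M) - k*trace(M)^2 per pixel."""
    gy, gx = np.gradient(gray.astype(np.float64))
    ixx, iyy, ixy = box_filter(gx * gx), box_filter(gy * gy), box_filter(gx * gy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

def extract_patches(inp, tgt, patch_size=32, n_patches=64):
    """Crop the n_patches strongest-corner patches from an input/target pair."""
    gray = inp.mean(axis=2) if inp.ndim == 3 else inp
    half = patch_size // 2
    # Exclude borders where a full patch would not fit.
    resp = np.full(gray.shape, -np.inf)
    resp[half:-half, half:-half] = harris_response(gray)[half:-half, half:-half]
    order = np.argsort(resp.ravel())[::-1][:n_patches]  # top-k, no NMS
    ys, xs = np.unravel_index(order, resp.shape)
    pairs = [(inp[y - half:y + half, x - half:x + half],
              tgt[y - half:y + half, x - half:x + half]) for y, x in zip(ys, xs)]
    return pairs, list(zip(ys, xs))

# Demo: a white square on black; strongest responses sit at its corners.
img = np.zeros((64, 64, 3)); img[16:48, 16:48] = 1.0
tgt = img.mean(axis=2)
pairs, centers = extract_patches(img, tgt, patch_size=16, n_patches=8)
```

Training on such crops keeps features at their native pixel scale, which is presumably why the patch-based mode is recommended over the 256×256 full-image resize.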
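The `--layers 3 --kernel-sizes 3,5,3` configuration used throughout the patch describes a stack of three same-padded convolutions whose weights end up in `cnn_weights_generated.wgsl`. A hedged numpy sketch of such a forward pass; the channel widths, ReLU activations, and random weights below are assumptions for illustration only, since the real widths and activations are defined by `train_cnn.py` and the generated WGSL:

```python
import numpy as np

def conv2d_same(x, w, b):
    """'Same'-padded cross-correlation: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    h, wid, _ = x.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((h, wid, w.shape[3]))
    for i in range(k):
        for j in range(k):
            out += xp[i:i + h, j:j + wid] @ w[i, j]
    return np.maximum(out + b, 0.0)  # ReLU (assumed activation)

rng = np.random.default_rng(0)
kernel_sizes = [3, 5, 3]       # matches --kernel-sizes 3,5,3
channels = [4, 8, 8, 1]        # RGBA in -> grayscale out; widths are assumptions
layers = [(rng.normal(0.0, 0.1, (k, k, channels[i], channels[i + 1])),
           np.zeros(channels[i + 1]))
          for i, k in enumerate(kernel_sizes)]

x = rng.random((32, 32, 4))    # one RGBA training patch
for w, b in layers:
    x = conv2d_same(x, w, b)   # spatial size preserved at every layer
```

Because every layer is same-padded, a 32×32 patch stays 32×32 through the stack, so the patch size chosen for training is independent of the network depth.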