From 7eb38fb10c7bea8d07889d2563fbc076307f8050 Mon Sep 17 00:00:00 2001
From: skal
Date: Mon, 16 Feb 2026 17:25:57 +0100
Subject: docs: streamline and consolidate markdown documentation

Remove 530 lines of redundant content, archive dated docs, compact CNN
training sections, fix inconsistencies (effect count, test status).
Improves maintainability and reduces context load for AI agents.

Co-Authored-By: Claude Sonnet 4.5
---
 doc/HOWTO.md | 133 +++++++----------------------------------------------------
 1 file changed, 16 insertions(+), 117 deletions(-)

(limited to 'doc/HOWTO.md')

diff --git a/doc/HOWTO.md b/doc/HOWTO.md
index 4cafaa2..f1401df 100644
--- a/doc/HOWTO.md
+++ b/doc/HOWTO.md
@@ -96,147 +96,46 @@ make run_util_tests # Utility tests
 
 ## Training
 
-### Patch-Based (Recommended)
-Extracts patches at salient points, trains on center pixels only (matches WGSL sliding window):
+### CNN v1 (Legacy)
 ```bash
-# Train with 32×32 patches at detected corners/edges
+# Patch-based (recommended)
 ./cnn_v1/training/train_cnn.py \
     --input training/input/ --target training/output/ \
     --patch-size 32 --patches-per-image 64 --detector harris \
-    --layers 3 --kernel_sizes 3,5,3 --epochs 5000 --batch_size 16 \
-    --checkpoint-every 1000
-```
-
-**Training behavior:**
-- Loss computed only on center pixels (excludes conv padding borders)
-- For 3-layer network: excludes 3px border on each side
-- Matches GPU shader sliding-window paradigm
-
-**Detectors:** `harris` (default), `fast`, `shi-tomasi`, `gradient`
-
-### Full-Image
-Processes entire image with sliding window (matches WGSL):
-```bash
-./cnn_v1/training/train_cnn.py \
-    --input training/input/ --target training/output/ \
-    --layers 3 --kernel_sizes 3,5,3 --epochs 10000 --batch_size 8 \
-    --checkpoint-every 1000
-```
+    --layers 3 --kernel_sizes 3,5,3 --epochs 5000
 
-### Export & Validation
-```bash
-# Generate shaders from checkpoint
-./cnn_v1/training/train_cnn.py --export-only checkpoints/checkpoint_epoch_5000.pth
-
-# Generate ground truth (sliding window, no tiling)
-./cnn_v1/training/train_cnn.py --infer input.png \
-    --export-only checkpoints/checkpoint_epoch_5000.pth \
-    --output ground_truth.png
+# Export shaders
+./cnn_v1/training/train_cnn.py --export-only checkpoints/checkpoint.pth
 ```
 
-**Inference:** Processes full image with sliding window (each pixel from NxN neighborhood). No tiling artifacts.
-
-**Kernel sizes:** 3×3 (36 weights), 5×5 (100 weights), 7×7 (196 weights)
-
 ### CNN v2 Training
-Enhanced CNN with parametric static features (7D input: RGBD + UV + sin encoding + bias).
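The removed training notes above (loss only on center pixels, 3 px border excluded for a three-layer network) amount to cropping the prediction and target before computing the loss. Below is a minimal PyTorch-style sketch, assuming an MSE objective; the helper name `center_loss` and the tensor layout are hypothetical and may differ from what `cnn_v1/training/train_cnn.py` actually does.

```python
import torch
import torch.nn.functional as F

kernel_sizes = [3, 5, 3]                          # per-layer kernels from the example command
border = sum((k - 1) // 2 for k in kernel_sizes)  # 3,5,3 -> 1 + 2 + 1 = 4 px per side
                                                  # (three 3x3 layers give the 3 px cited above)

def center_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE restricted to center pixels whose receptive field avoids the padded border."""
    pred_c = pred[..., border:-border, border:-border]
    target_c = target[..., border:-border, border:-border]
    return F.mse_loss(pred_c, target_c)

# e.g. loss = center_loss(model(patch_batch), target_batch) on 32x32 patches
```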
-
-**Complete Pipeline** (recommended):
 ```bash
-# Train → Export → Build → Validate (default config)
+# Default pipeline (train → export → validate)
 ./cnn_v2/scripts/train_cnn_v2_full.sh
 
-# Rapid debug (1 layer, 3×3, 5 epochs)
-./cnn_v2/scripts/train_cnn_v2_full.sh --num-layers 1 --kernel-sizes 3 --epochs 5 --output-weights test.bin
-
-# Custom training parameters
-./cnn_v2/scripts/train_cnn_v2_full.sh --epochs 500 --batch-size 32 --checkpoint-every 100
+# Quick debug (1 layer, 5 epochs)
+./cnn_v2/scripts/train_cnn_v2_full.sh --num-layers 1 --epochs 5
 
 # Custom architecture
-./cnn_v2/scripts/train_cnn_v2_full.sh --kernel-sizes 3,5,3 --num-layers 3 --mip-level 1
-
-# Custom output path
-./cnn_v2/scripts/train_cnn_v2_full.sh --output-weights workspaces/test/cnn_weights.bin
-
-# Grayscale loss (compute loss on luminance instead of RGBA)
-./cnn_v2/scripts/train_cnn_v2_full.sh --grayscale-loss
+./cnn_v2/scripts/train_cnn_v2_full.sh --kernel-sizes 3,5,3 --epochs 500
 
-# Custom directories
-./cnn_v2/scripts/train_cnn_v2_full.sh --input training/input --target training/target_2
-
-# Full-image mode (instead of patch-based)
-./cnn_v2/scripts/train_cnn_v2_full.sh --full-image --image-size 256
+# Validation only
+./cnn_v2/scripts/train_cnn_v2_full.sh --validate
 
-# See all options
+# All options
 ./cnn_v2/scripts/train_cnn_v2_full.sh --help
 ```
 
-**Defaults:** 200 epochs, 3×3 kernels, 8→4→4 channels, batch-size 16, patch-based (8×8, harris detector).
-- Live progress with single-line update
-- Always saves final checkpoint (regardless of --checkpoint-every interval)
-- When multiple kernel sizes provided (e.g., 3,5,3), num_layers derived from list length
-- Validates all input images on final epoch
-- Exports binary weights (storage buffer architecture)
-- Streamlined output: single-line export summary, compact validation
-- All parameters configurable via command-line
-
-**Validation Only** (skip training):
-```bash
-# Use latest checkpoint
-./cnn_v2/scripts/train_cnn_v2_full.sh --validate
-
-# Use specific checkpoint
-./cnn_v2/scripts/train_cnn_v2_full.sh --validate checkpoints/checkpoint_epoch_50.pth
-```
+**Defaults:** 200 epochs, 3×3 kernels, 8→4→4 channels, patch-based (8×8). Outputs ~3.2 KB f16 weights.
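The defaults above allow a rough size check of the exported weights. The sketch below is illustrative only: the channel list (7 static-feature inputs feeding 8→4→4 channels) is an assumption pieced together from the surrounding text, and the real `cnn_v2_weights.bin` also stores a header and per-layer metadata, so the on-disk ~3.2 KB figure is larger than the raw f16 payload computed here. It also reflects the convention, noted in the old docs, that the layer count follows the kernel-size list.

```python
def conv_stack_params(channels, kernel_sizes):
    """channels = [c_in, c1, c2, ...]; weights per layer are c_in*c_out*k*k plus one bias per output channel."""
    assert len(kernel_sizes) == len(channels) - 1   # layer count is derived from the kernel-size list
    return sum(ci * co * k * k + co
               for ci, co, k in zip(channels, channels[1:], kernel_sizes))

# Assumed shape: 7D static features in, 8 -> 4 -> 4 channels, 3x3 kernels throughout.
n = conv_stack_params([7, 8, 4, 4], [3, 3, 3])
print(n, "parameters ->", 2 * n, "bytes of raw f16 payload")   # 2 bytes per f16 value
```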
-
-**Manual Training:**
+**Manual export:**
 ```bash
-# Default config
-./cnn_v2/training/train_cnn_v2.py \
-    --input training/input/ --target training/target_2/ \
-    --epochs 100 --batch-size 16 --checkpoint-every 5
-
-# Custom architecture (per-layer kernel sizes)
-./cnn_v2/training/train_cnn_v2.py \
-    --input training/input/ --target training/target_2/ \
-    --kernel-sizes 1,3,5 \
-    --epochs 5000 --batch_size 16
-
-# Mip-level for p0-p3 features (0=original, 1=half, 2=quarter, 3=eighth)
-./cnn_v2/training/train_cnn_v2.py \
-    --input training/input/ --target training/target_2/ \
-    --mip-level 1 \
-    --epochs 100 --batch-size 16
-
-# Grayscale loss (compute loss on luminance Y = 0.299*R + 0.587*G + 0.114*B)
-./cnn_v2/training/train_cnn_v2.py \
-    --input training/input/ --target training/target_2/ \
-    --grayscale-loss \
-    --epochs 100 --batch-size 16
-```
-
-**Export Binary Weights:**
-```bash
-# Verbose output (shows all layer details)
-./training/export_cnn_v2_weights.py checkpoints/checkpoint_epoch_100.pth \
+./training/export_cnn_v2_weights.py checkpoints/checkpoint.pth \
     --output-weights workspaces/main/cnn_v2_weights.bin
-
-# Quiet mode (single-line summary)
-./training/export_cnn_v2_weights.py checkpoints/checkpoint_epoch_100.pth \
-    --output-weights workspaces/main/cnn_v2_weights.bin \
-    --quiet
-```
-
-Generates binary format: header + layer info + f16 weights (~3.2 KB for 3-layer model).
-Storage buffer architecture allows dynamic layer count.
-Use `--quiet` for streamlined output in scripts (used automatically by train_cnn_v2_full.sh).
-
-**TODO:** 8-bit quantization for 2× size reduction (~1.6 KB). Requires quantization-aware training (QAT).
-
 ```
 
-**Validation:** Use HTML tool (`cnn_v2/tools/cnn_v2_test/index.html`) for CNN v2 validation. See `cnn_v2/docs/CNN_V2_WEB_TOOL.md`.
+See `cnn_v2/docs/CNN_V2.md` for architecture details and web validation tool.
 
 ---
--
cgit v1.2.3
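The TODO removed above proposes 8-bit quantization for a ~2× reduction of the ~3.2 KB f16 weight payload, noting that quantization-aware training would be needed to preserve quality. The factor of two comes purely from storage width; a minimal post-training sketch of symmetric int8 quantization follows. It is illustrative only: not the project's exporter, QAT left out, and the weight count is a stand-in.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8: w ≈ codes * scale, with one scale kept alongside the codes."""
    scale = float(np.abs(w).max()) / 127.0 or 1.0
    codes = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return codes, scale

weights = np.random.randn(1600).astype(np.float16)     # ~3.2 KB of f16, stand-in values
codes, scale = quantize_int8(weights.astype(np.float32))
print(weights.nbytes, "bytes (f16) ->", codes.nbytes, "bytes (int8)")   # 3200 -> 1600
```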