path: root/training/train_cnn_v2.py
Age | Commit message | Author
32 hours | TODO: Add random sampling to patch-based training | skal
Added note for future enhancement: mix salient + random samples.

Rationale:
- Salient point detection focuses on edges/corners
- Random samples improve generalization across the entire image
- Prevents overfitting to only high-gradient regions

Proposed implementation:
- Default: 90% salient points, 10% random samples
- Configurable: --random-sample-percent parameter
- Example: 64 patches = 58 salient + 6 random

Location: train_cnn_v2.py
- TODO in _detect_salient_points() method
- TODO in argument parser

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
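A minimal sketch of the proposed mix, assuming the existing _detect_salient_points() returns centers sorted by response strength; sample_patch_centers and its arguments are illustrative names, not the actual train_cnn_v2.py API:

    import numpy as np

    def sample_patch_centers(image, detect_salient_points,
                             patches_per_image=64, random_sample_percent=10,
                             patch_size=32):
        # Split the patch budget: e.g. 64 patches -> 58 salient + 6 random.
        h, w = image.shape[:2]
        n_random = int(round(patches_per_image * random_sample_percent / 100.0))
        n_salient = patches_per_image - n_random

        # Strongest salient responses first (edges/corners).
        salient = np.asarray(detect_salient_points(image))[:n_salient]

        # Uniformly random centers, kept far enough from the border to fit a patch.
        half = patch_size // 2
        ys = np.random.randint(half, h - half, size=n_random)
        xs = np.random.randint(half, w - half, size=n_random)
        random_pts = np.stack([ys, xs], axis=1)

        return np.concatenate([salient, random_pts], axis=0)

With the defaults above, --random-sample-percent 10 reproduces the 58 + 6 split from the note.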
32 hours | CNN v2: Patch-based training as default (like CNN v1) | skal
Salient point detection on original images with patch extraction.

Changes:
- Added PatchDataset class (harris/fast/shi-tomasi/gradient detectors)
- Detects salient points on ORIGINAL images (no resize)
- Extracts 32×32 patches around salient points
- Default: 64 patches/image, harris detector
- Batch size: 16 (512 patches per batch)

Training modes:
1. Patch-based (default): --patch-size 32 --patches-per-image 64 --detector harris
2. Full-image (option):   --full-image --image-size 256

Benefits:
- Focuses training on interesting regions
- Handles variable image sizes naturally
- Matches CNN v1 workflow
- Better convergence with limited data (8 images → 512 patches)

Script updated:
- train_cnn_v2_full.sh: Patch-based by default
- Configuration exposed for easy switching

Example:
./scripts/train_cnn_v2_full.sh   # Patch-based
# Edit script: uncomment FULL_IMAGE for resize mode

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
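As a rough illustration of the patch pipeline (not the actual PatchDataset code), both the harris and shi-tomasi detectors can be driven through OpenCV's goodFeaturesToTrack; the helper name and threshold values below are assumptions:

    import cv2
    import numpy as np

    def extract_patches(image, patches_per_image=64, patch_size=32, detector="harris"):
        # Detect salient points on the ORIGINAL image (no resize), then cut 32x32 windows.
        gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
        corners = cv2.goodFeaturesToTrack(
            gray, maxCorners=patches_per_image, qualityLevel=0.01,
            minDistance=patch_size, useHarrisDetector=(detector == "harris"))
        half = patch_size // 2
        h, w = gray.shape
        patches = []
        for x, y in corners.reshape(-1, 2).astype(int):
            # Clamp each center so the full patch stays inside the image.
            x = int(np.clip(x, half, w - half))
            y = int(np.clip(y, half, h - half))
            patches.append(image[y - half:y + half, x - half:x + half])
        return np.stack(patches)   # (N, 32, 32, 3), N <= patches_per_image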
32 hours | Fix: CNN v2 training - handle variable image sizes | skal
Training script now resizes all images to a fixed size before batching.

Issue: RuntimeError when batching variable-sized images
- Images had different dimensions (376x626 vs 344x361)
- PyTorch DataLoader requires uniform tensor sizes for batching

Solution:
- Add --image-size parameter (default: 256)
- Resize all images to target_size using LANCZOS interpolation
- Training becomes independent of the original image dimensions

Changes:
- train_cnn_v2.py: ImagePairDataset now resizes to fixed dimensions
- train_cnn_v2_full.sh: Added IMAGE_SIZE=256 configuration

Tested: 8 image pairs, variable sizes → uniform 256×256 batches

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
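The fix amounts to a resize in the dataset's image loader; a minimal sketch assuming PIL, with load_resized_pair as an illustrative helper rather than the exact ImagePairDataset code:

    import numpy as np
    import torch
    from PIL import Image

    def load_resized_pair(input_path, target_path, image_size=256):
        # Resize both halves of a pair to image_size x image_size so the
        # DataLoader can stack them into uniform batches.
        def load(path):
            img = Image.open(path).convert("RGB")
            img = img.resize((image_size, image_size), Image.LANCZOS)
            arr = np.asarray(img, dtype=np.float32) / 255.0
            return torch.from_numpy(arr).permute(2, 0, 1)   # HWC -> CHW
        return load(input_path), load(target_path)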
32 hours | CNN v2: parametric static features - Phases 1-4 | skal
Infrastructure for enhanced CNN post-processing with 7D feature input.

Phase 1: Shaders
- Static features compute (RGBD + UV + sin10_x + bias → 8×f16)
- Layer template (convolution skeleton, packing/unpacking)
- 3 mip level support for multi-scale features

Phase 2: C++ Effect
- CNNv2Effect class (multi-pass architecture)
- Texture management (static features, layer buffers)
- Build integration (CMakeLists, assets, tests)

Phase 3: Training Pipeline
- train_cnn_v2.py: PyTorch model with static feature concatenation
- export_cnn_v2_shader.py: f32→f16 quantization, WGSL generation
- Configurable architecture (kernels, channels)

Phase 4: Validation
- validate_cnn_v2.sh: End-to-end pipeline
- Checkpoint → shaders → build → test images

Tests: 36/36 passing

Next: Complete render pipeline implementation (bind groups, multi-pass)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
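On the training side, the static-feature concatenation described in Phase 3 might look roughly like this; the exact definition of sin10_x (here sin(10·u) of the normalized x coordinate) and the channel order are assumptions, not the actual train_cnn_v2.py code:

    import torch

    def build_static_features(rgbd):
        # rgbd: (N, 4, H, W). Adds UV, sin10_x and a bias plane -> 8 channels,
        # mirroring the RGBD + UV + sin10_x + bias layout the shader packs as 8 x f16.
        n, _, h, w = rgbd.shape
        v, u = torch.meshgrid(torch.linspace(0.0, 1.0, h),
                              torch.linspace(0.0, 1.0, w), indexing="ij")
        uv = torch.stack([u, v]).expand(n, 2, h, w)        # normalized pixel coordinates
        sin10_x = torch.sin(10.0 * u).expand(n, 1, h, w)   # periodic positional feature
        bias = torch.ones(n, 1, h, w)                      # constant bias channel
        return torch.cat([rgbd, uv, sin10_x, bias], dim=1) # exporter quantizes weights to f16 later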