path: root/training
18 hours  gen_identity_weights: Add --p47 option for static feature visualization  (skal)

Adds --p47 flag to output static features directly:
- p4 → ch0 (UV.x)
- p5 → ch1 (UV.y)
- p6 → ch2 (sin encoding)
- p7 → ch3 (bias)

Useful for visualizing static feature generation without input RGBA.
Updated doc/CNN_V2_DEBUG_TOOLS.md with --p47 usage.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
18 hours  gen_identity_weights: Add --mix option for static feature blending  (skal)

Adds --mix flag to blend input channels with static features:
- p0+p4 → p0 (RGBA + UV.x)
- p1+p5 → p1 (RGBA + UV.y)
- p2+p6 → p2 (RGBA + sin encoding)
- p3+p7 → p3 (RGBA + bias)

Useful for debugging static feature contribution in CNN v2.
Updated doc/CNN_V2_DEBUG_TOOLS.md with --mix usage examples.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
19 hours  CNN v2: Fix Layer 0 visualization scale (was 0.5, now 1.0)  (skal)

Layer 0 output is clamped to [0,1] and does not need 0.5 dimming.
Middle layers (ReLU) keep the 0.5 scale to accommodate values >1.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
19 hours  CNN v2: Add debugging tools for mismatch investigation  (skal)

Add identity weight generator and composited layer save for debugging
HTML/C++ output differences.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
19 hours  CNN v2 training: Fix float64/float32 dtype mismatch in depth feature  (skal)

Cast the depth array to float32 when provided, preventing a torch
Double/Float dtype mismatch during the forward pass.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
19 hours  CNN v2: Alpha channel depth handling and layer visualization  (skal)

Training changes:
- Changed p3 default depth from 0.0 to 1.0 (far plane semantics)
- Extract depth from target alpha channel in both datasets
- Consistent alpha-as-depth across training/validation

Test tool enhancements (cnn_test):
- Added load_depth_from_alpha() for R32Float depth texture
- Fixed bind group layout for UnfilterableFloat sampling
- Added --save-intermediates with per-channel grayscale composites
- Each layer saved as 4x wide PNG (p0-p3 stacked horizontally)
- Global layers_composite.png for vertical layer stack overview

Investigation notes:
- Static features p4-p7 ARE computed and bound correctly
- Sin_20_y pattern visibility difference between tools under investigation
- Binary weights timestamp (Feb 13 20:36) vs HTML tool (Feb 13 22:12)
- Next: Update HTML tool with canonical binary weights

handoff(Claude): HTML tool weights update pending - base64 encoded canonical
weights ready in /tmp/weights_b64.txt for line 392 replacement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
20 hours  CNN v2: Use alpha channel for p3 depth feature + layer visualization  (skal)

Training changes (train_cnn_v2.py):
- p3 now uses target image alpha channel (depth proxy for 2D images)
- Default changed from 0.0 → 1.0 (far plane semantics)
- Both PatchDataset and ImagePairDataset updated

Test tools (cnn_test.cc):
- New load_depth_from_alpha() extracts PNG alpha → p3 texture
- Fixed bind group layout: use UnfilterableFloat for R32Float depth
- Added --save-intermediates support for CNN v2:
  * Each layer_N.png shows 4 channels horizontally (1812×345 grayscale)
  * layers_composite.png stacks all layers vertically (1812×1380)
  * static_features.png shows 4 feature channels horizontally
- Per-channel visualization enables debugging layer-by-layer differences

HTML tool (index.html):
- Extract alpha channel from input image → depth texture
- Matches training data distribution for validation

Note: Current weights trained with p3=0 are now mismatched. Both tools use
p3=alpha consistently, so outputs remain comparable for debugging. Retrain
required for optimal quality.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
25 hours  CNN v2 training: Add --grayscale-loss option for luminance-based loss computation  (skal)

Add an option to compute the loss on grayscale (Y = 0.299*R + 0.587*G + 0.114*B)
instead of full RGBA channels. Useful for training models that prioritize
luminance accuracy over color accuracy.

Changes:
- training/train_cnn_v2.py: Add --grayscale-loss flag and grayscale conversion in loss computation
- scripts/train_cnn_v2_full.sh: Add --grayscale-loss parameter support
- doc/CNN_V2.md: Document grayscale loss in training configuration and checkpoint format
- doc/HOWTO.md: Add usage examples for --grayscale-loss flag

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
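A minimal sketch of this luminance loss in PyTorch, assuming RGBA tensors in
NCHW layout with values in [0,1] (the actual tensor shapes and helper names in
train_cnn_v2.py may differ):

import torch
import torch.nn.functional as F

def grayscale_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE on luminance Y = 0.299*R + 0.587*G + 0.114*B (BT.601 weights)."""
    w = torch.tensor([0.299, 0.587, 0.114], device=pred.device).view(1, 3, 1, 1)
    y_pred = (pred[:, :3] * w).sum(dim=1)      # (N, H, W)
    y_target = (target[:, :3] * w).sum(dim=1)  # (N, H, W)
    return F.mse_loss(y_pred, y_target)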
26 hours  CNN v2: Change feature #6 from sin(10*x) to sin(20*y)  (skal)

Update the positional encoding to use the vertical coordinate at a higher frequency.

Changes:
- train_cnn_v2.py: sin10_x → sin20_y (computed from uv_y)
- cnn_v2_static.wgsl: sin10_x → sin20_y (computed from uv_y)
- index.html: sin10_x → sin20_y (STATIC_SHADER)
- CNN_V2.md: Update feature descriptions and examples
- CNN_V2_BINARY_FORMAT.md: Update static features documentation

Feature vector: [p0, p1, p2, p3, uv_x, uv_y, sin20_y, bias]

Rationale: Higher frequency (20 vs 10) + vertical axis provides better
spatial discrimination for position encoding.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
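As a rough illustration of the non-parametric half of that feature vector,
the four coordinate features could be built per pixel like this in NumPy
(a sketch only, assuming UV in [0,1]; the real code lives in
compute_static_features() and cnn_v2_static.wgsl):

import numpy as np

def coord_features(h: int, w: int) -> np.ndarray:
    """Returns (4, H, W): uv_x, uv_y, sin(20*uv_y), bias=1.0."""
    uv_y, uv_x = np.meshgrid(np.linspace(0.0, 1.0, h),
                             np.linspace(0.0, 1.0, w), indexing="ij")
    sin20_y = np.sin(20.0 * uv_y)
    bias = np.ones_like(uv_x)
    return np.stack([uv_x, uv_y, sin20_y, bias])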
26 hours  CNN v2: Add TODO for flexible feature layout in binary format v3  (skal)

Document a future enhancement for arbitrary feature vector layouts.

Proposed feature descriptor in binary format v3:
- Specify feature types, sources, and ordering
- Enable runtime experimentation without shader recompilation
- Examples: [R,G,B,dx,dy,uv_x,bias] or [mip1.r,mip2.g,laplacian,uv_x,sin20_x,bias]

Added TODOs in:
- CNN_V2_BINARY_FORMAT.md: Detailed proposal with struct layout
- CNN_V2.md: Future extensions section
- train_cnn_v2.py: compute_static_features() docstring
- cnn_v2_static.wgsl: Shader header comment
- cnn_v2_effect.cc: Version check comment

Current limitation: Hardcoded [p0,p1,p2,p3,uv_x,uv_y,sin10_x,bias] layout.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
26 hours  CNN v2: Add mip-level support to runtime effect  (skal)

Binary format v2 includes mip_level in the header (20 bytes, was 16).
The effect reads mip_level and passes it to the static features shader via a
uniform. The shader samples from the correct mip texture based on mip_level.

Changes:
- export_cnn_v2_weights.py: Header v2 with mip_level field
- cnn_v2_effect.h: Add StaticFeatureParams, mip_level member, params buffer
- cnn_v2_effect.cc: Read mip_level from weights, create/bind params buffer, update per-frame
- cnn_v2_static.wgsl: Accept params uniform, sample from selected mip level

Binary format v2:
- Header: 20 bytes (magic, version=2, num_layers, total_weights, mip_level)
- Backward compatible: v1 weights load with mip_level=0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
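Given the five header fields listed above at 4 bytes each, the v2 header could
be packed like this in Python. The MAGIC constant below is a hypothetical
placeholder; the actual magic value and field order in export_cnn_v2_weights.py
are not shown in this log:

import struct

MAGIC = 0x434E4E32  # hypothetical tag; the real magic value may differ

def pack_header_v2(num_layers: int, total_weights: int, mip_level: int) -> bytes:
    """20-byte header: magic, version=2, num_layers, total_weights, mip_level."""
    return struct.pack("<5I", MAGIC, 2, num_layers, total_weights, mip_level)

assert len(pack_header_v2(3, 1296, 0)) == 20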
26 hours  CNN v2 export: Read and display mip_level from checkpoints  (skal)

Export scripts now read mip_level from the checkpoint config and display it.
The shader generator includes the mip level in generated comments.

Changes:
- export_cnn_v2_weights.py: Read mip_level, print in config
- export_cnn_v2_shader.py: Read mip_level, pass to shader gen, add to comments

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
26 hours  CNN v2: Add --mip-level option for parametric features  (skal)

Add mip level control for p0-p3 features (0=original, 1=half, 2=quarter,
3=eighth). Uses pyrDown/pyrUp for proper Gaussian filtering during mip
generation.

Changes:
- compute_static_features(): Accept mip_level param, generate mip via cv2 pyramid
- PatchDataset/ImagePairDataset: Pass mip_level to feature computation
- CLI: Add --mip-level arg with choices [0,1,2,3]
- Save mip_level in checkpoint config for tracking
- Doc updates: HOWTO.md and CNN_V2.md

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
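A sketch of the cv2 pyramid approach, assuming the mip is brought back to the
original resolution so p0-p3 stay aligned with the per-pixel static features
(the actual resampling strategy in compute_static_features() may differ):

import cv2
import numpy as np

def make_mip(img: np.ndarray, mip_level: int) -> np.ndarray:
    """Gaussian-filtered mip: pyrDown mip_level times, then pyrUp back."""
    sizes = []
    out = img
    for _ in range(mip_level):
        sizes.append((out.shape[1], out.shape[0]))  # (width, height)
        out = cv2.pyrDown(out)                      # Gaussian blur + 2x decimate
    for size in reversed(sizes):
        out = cv2.pyrUp(out, dstsize=size)          # back to original resolution
    return out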
26 hours  CNN v2: Fix activation function mismatch between training and inference  (skal)

Layer 0 now uses clamp [0,1] in both training and inference (the shaders were
using ReLU).

- index.html: Add is_layer_0 flag to LayerParams, handle Layer 0 separately
- export_cnn_v2_shader.py: Generate correct activation for Layer 0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
30 hours  CNN v2 training: Use target image alpha channel  (skal)

Changed target loading from RGB to RGBA to preserve transparency. The model
learns to predict the alpha channel from the target image instead of constant
1.0 padding.

Before: Target padded with alpha=1.0
After: Target uses actual alpha from image (or 1.0 if no alpha)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
30 hours  CNN v2: Restore per-layer kernel sizes support  (skal)

Training:
- train_cnn_v2.py: Accept --kernel-sizes as comma-separated list
- CNNv2 model: Per-layer kernel sizes (e.g., [1,3,5])
- Single value replicates across layers (e.g., "3" → [3,3,3])

Export:
- export_cnn_v2_weights.py: Backward compatible with old checkpoints
- Handles both kernel_size (old) and kernel_sizes (new) format

Documentation:
- CNN_V2.md: Updated code examples and config format
- HOWTO.md: Updated training examples to show comma-separated syntax

Binary format: Already supports per-layer kernel sizes (no changes)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
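The flag parsing described above could look like this (a sketch; names other
than --kernel-sizes are assumptions, not the actual train_cnn_v2.py code):

def parse_kernel_sizes(arg: str, num_layers: int) -> list[int]:
    """'1,3,5' → [1, 3, 5]; a single value like '3' replicates to [3, 3, 3]."""
    sizes = [int(s) for s in arg.split(",")]
    if len(sizes) == 1:
        sizes = sizes * num_layers
    if len(sizes) != num_layers:
        raise ValueError(f"expected {num_layers} kernel sizes, got {len(sizes)}")
    return sizes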
30 hours  CNN v2: Refactor to uniform 12D→4D architecture  (skal)

**Architecture changes:**
- Static features (8D): p0-p3 (parametric) + uv_x, uv_y, sin(10×uv_x), bias
- Input RGBD (4D): fed separately to all layers
- All layers: uniform 12D→4D (4 prev/input + 8 static → 4 output)
- Bias integrated in static features (bias=False in PyTorch)

**Weight calculations:**
- 3 layers × (12 × 3×3 × 4) = 1296 weights
- f16: 2.6 KB (vs old variable arch: ~6.4 KB)

**Updated files:**

*Training (Python):*
- train_cnn_v2.py: Uniform model, takes input_rgbd + static_features
- export_cnn_v2_weights.py: Binary export for storage buffers
- export_cnn_v2_shader.py: Per-layer shader export (debugging)

*Shaders (WGSL):*
- cnn_v2_static.wgsl: p0-p3 parametric features (mips/gradients)
- cnn_v2_compute.wgsl: 12D input, 4D output, vec4 packing

*Tools:*
- HTML tool (cnn_v2_test): Updated for 12D→4D, layer visualization

*Docs:*
- CNN_V2.md: Updated architecture, training, validation sections
- HOWTO.md: Reference HTML tool for validation

*Removed:*
- validate_cnn_v2.sh: Obsolete (used CNN v1 tool)

All code consistent with bias=False (bias in static features as 1.0).

handoff(Claude): CNN v2 architecture finalized and documented
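A minimal PyTorch sketch of this uniform layer structure (class and argument
names are illustrative, not the actual train_cnn_v2.py code; the activations
follow the Layer 0 clamp / middle-layer ReLU convention noted in the commits
above, and the real per-layer activations may differ):

import torch
import torch.nn as nn

class CNNv2Sketch(nn.Module):
    """Each layer maps 12D (4 prev/input + 8 static) to 4D. The bias lives in
    the static features (constant 1.0 channel), so Conv2d uses bias=False."""
    def __init__(self, num_layers: int = 3, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.layers = nn.ModuleList(
            nn.Conv2d(12, 4, kernel_size, padding=pad, bias=False)
            for _ in range(num_layers))

    def forward(self, input_rgbd: torch.Tensor, static8: torch.Tensor) -> torch.Tensor:
        x = input_rgbd                                       # (N, 4, H, W)
        for i, conv in enumerate(self.layers):
            x = conv(torch.cat([x, static8], dim=1))         # 12D in, 4D out
            x = x.clamp(0.0, 1.0) if i == 0 else torch.relu(x)
        return x

Weight count check: 3 layers × (12 in × 3×3 taps × 4 out) = 1296, matching the
figure above.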
34 hours  Add weights/ subdirectory to workspaces for CNN training outputs  (skal)

Each workspace now has a weights/ directory to store binary weight files from
CNN training (e.g., cnn_v2_weights.bin).

Changes:
- Created workspaces/{main,test}/weights/
- Moved cnn_v2_weights.bin → workspaces/main/weights/
- Updated assets.txt reference
- Updated training scripts and export tool paths

handoff(Claude): Workspace weights/ directories added
2 days  test_demo: Add beat-synchronized CNN post-processing with version selection  (skal)

- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for dynamic blend in both CNN effects
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs its own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 days  Refine training script output and validation  (skal)

1. Loss printed at every epoch with \r (no scrolling)
2. Validation only on final epoch (not all checkpoints)
3. Process all input images (not just img_000.png)

Training output now shows live progress with a single-line update.
2 days  TODO: 8-bit weight quantization for 2× size reduction  (skal)

- Add QAT (quantization-aware training) notes
- Requires training with fake quantization
- Target: ~1.6 KB weights (vs 3.2 KB f16)
- Shader unpacking needs adaptation (4× u8 per u32)
2 days  CNN v2: Storage buffer complete - real weights exported  (skal)

- Export weights from epoch 70 checkpoint (3.2 KB binary)
- Disable shader template generation (use manual cnn_v2_compute.wgsl)
- Build successful with real weights
- Ready for integration testing

Storage buffer architecture complete:
- Dynamic layer count support
- ~0.3ms overhead vs constants (negligible)
- Single shader, flexible configuration
- Binary format: header + layer info + f16 weights
2 days  CNN v2: storage buffer architecture foundation  (skal)

- Add binary weight format (header + layer info + packed f16)
- New export_cnn_v2_weights.py for binary weight export
- Single cnn_v2_compute.wgsl shader with storage buffer
- Load weights in CNNv2Effect::load_weights()
- Create layer compute pipeline with 5 bindings
- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels

Next: Complete bind group creation and multi-layer compute execution
2 days  TODO: Add random sampling to patch-based training  (skal)

Added note for future enhancement: mix salient + random samples.

Rationale:
- Salient point detection focuses on edges/corners
- Random samples improve generalization across the entire image
- Prevents overfitting to only high-gradient regions

Proposed implementation:
- Default: 90% salient points, 10% random samples
- Configurable: --random-sample-percent parameter
- Example: 64 patches = 58 salient + 6 random

Location: train_cnn_v2.py
- TODO in _detect_salient_points() method
- TODO in argument parser

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 days  CNN v2: Patch-based training as default (like CNN v1)  (skal)

Salient point detection on original images with patch extraction.

Changes:
- Added PatchDataset class (harris/fast/shi-tomasi/gradient detectors)
- Detects salient points on ORIGINAL images (no resize)
- Extracts 32×32 patches around salient points
- Default: 64 patches/image, harris detector
- Batch size: 16 (512 patches per batch)

Training modes:
1. Patch-based (default): --patch-size 32 --patches-per-image 64 --detector harris
2. Full-image (option): --full-image --image-size 256

Benefits:
- Focuses training on interesting regions
- Handles variable image sizes naturally
- Matches CNN v1 workflow
- Better convergence with limited data (8 images → 512 patches)

Script updated:
- train_cnn_v2_full.sh: Patch-based by default
- Configuration exposed for easy switching

Example:
./scripts/train_cnn_v2_full.sh   # Patch-based
# Edit script: uncomment FULL_IMAGE for resize mode

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 days  Fix: CNN v2 training - handle variable image sizes  (skal)

The training script now resizes all images to a fixed size before batching.

Issue: RuntimeError when batching variable-sized images
- Images had different dimensions (376x626 vs 344x361)
- PyTorch DataLoader requires uniform tensor sizes for batching

Solution:
- Add --image-size parameter (default: 256)
- Resize all images to target_size using LANCZOS interpolation
- Training becomes independent of the source aspect ratio

Changes:
- train_cnn_v2.py: ImagePairDataset now resizes to fixed dimensions
- train_cnn_v2_full.sh: Added IMAGE_SIZE=256 configuration

Tested: 8 image pairs, variable sizes → uniform 256×256 batches

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
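The resize step might look like this (a sketch; ImagePairDataset's actual
loading code is not shown in this log, and the RGBA conversion follows the
alpha-channel commit above):

from PIL import Image

def load_resized(path: str, target_size: int = 256) -> Image.Image:
    """Force uniform dimensions so the DataLoader can stack tensors."""
    img = Image.open(path).convert("RGBA")
    return img.resize((target_size, target_size), Image.LANCZOS)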
2 days  CNN v2: parametric static features - Phases 1-4  (skal)

Infrastructure for enhanced CNN post-processing with 7D feature input.

Phase 1: Shaders
- Static features compute (RGBD + UV + sin10_x + bias → 8×f16)
- Layer template (convolution skeleton, packing/unpacking)
- 3 mip level support for multi-scale features

Phase 2: C++ Effect
- CNNv2Effect class (multi-pass architecture)
- Texture management (static features, layer buffers)
- Build integration (CMakeLists, assets, tests)

Phase 3: Training Pipeline
- train_cnn_v2.py: PyTorch model with static feature concatenation
- export_cnn_v2_shader.py: f32→f16 quantization, WGSL generation
- Configurable architecture (kernels, channels)

Phase 4: Validation
- validate_cnn_v2.sh: End-to-end pipeline
- Checkpoint → shaders → build → test images

Tests: 36/36 passing

Next: Complete render pipeline implementation (bind groups, multi-pass)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 days  remove more stale files  (skal)
3 days  feat: implement beat-based timing system  (skal)

BREAKING CHANGE: Timeline format now uses beats as default unit

## Core Changes

**Uniform Structure (32 bytes maintained):**
- Added `beat_time` (absolute beats for musical animation)
- Added `beat_phase` (fractional 0-1 for smooth oscillation)
- Renamed `beat` → `beat_phase`
- Kept `time` (physical seconds, tempo-independent)

**Seq Compiler:**
- Default: all numbers are beats (e.g., `5`, `16.5`)
- Explicit seconds: `2.5s` suffix
- Explicit beats: `5b` suffix (optional clarity)

**Runtime:**
- Effects receive both physical time and beat time
- Variable tempo affects audio only (visual uses physical time)
- Beat calculation from audio time: `beat_time = audio_time * BPM / 60`

## Migration
- Existing timelines: converted with explicit 's' suffix
- New content: use beat notation (musical alignment)
- Backward compatible via explicit notation

## Benefits
- Musical alignment: sequences sync to bars/beats
- BPM independence: timing preserved on BPM changes
- Shader capabilities: animate to musical time
- Clean separation: tempo scaling vs. visual rendering

## Testing
- Build: ✅ Complete
- Tests: ✅ 34/36 passing (94%)
- Demo: ✅ Ready

handoff(Claude): Beat-based timing system implemented. Variable tempo only
affects audio sample triggering. Visual effects use physical_time (constant)
and beat_time (musical). Shaders can now animate to beats.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
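Following the formula quoted in the commit, the two beat uniforms could be
derived like this (a sketch assuming constant BPM; variable tempo would
integrate over tempo changes instead):

def beat_uniforms(audio_time: float, bpm: float) -> tuple[float, float]:
    """beat_time: absolute beats; beat_phase: fractional [0,1) within the beat."""
    beat_time = audio_time * bpm / 60.0
    beat_phase = beat_time % 1.0
    return beat_time, beat_phase

# e.g. at 120 BPM, t=1.25s → beat_time=2.5, beat_phase=0.5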
3 days  add trained layers  (skal)

+misc
3 days  docs: Update CNN comments and add bias fix summary  (skal)

- Fix stale comments: RGBD→RGB (not grayscale)
- Clarify shape transformations in inference
- Add CNN_BIAS_FIX_2026-02.md consolidating recent fixes
- Include regenerated weights with 5x5 kernel for layer 0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  fix: CNN bias accumulation and output format improvements  (skal)

- Fix bias division bug: divide by num_positions to compensate for shader
  loop accumulation (affects all layers)
- train_cnn.py: Save RGBA output preserving alpha channel from input
- Add --debug-hex flag to both tools for pixel-level debugging
- Remove sRGB/linear_png debug code from cnn_test
- Regenerate weights with corrected bias export

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
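The fix follows from the shader folding the bias into the constant-1.0 input
component at every tap of the N×N convolution loop (see the vec4 SIMD commit
below), so a bias meant to be added once accumulates num_positions times. The
export-side compensation could look like this (a sketch with assumed names):

def export_bias(bias: float, kernel_size: int) -> float:
    """The shader adds the bias term at each of the N*N loop positions, so
    divide once here so the accumulated sum equals the original bias."""
    num_positions = kernel_size * kernel_size
    return bias / num_positions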
3 days  update cnn code  (skal)
3 days  refactor: Use linspace(-1,1) directly for coords  (skal)

Simplify coordinate initialization by generating the [-1,1] range directly
instead of [0,1] then normalizing. Mathematically equivalent, but clearer.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  fix: Compute gray from [0,1] RGB in CNN shader generator  (skal)

Match the training forward pass: compute grayscale from the original [0,1]
RGB before normalization, then normalize gray to [-1,1].

Previously the generated shader computed gray from the normalized [-1,1] RGB,
creating a mismatch with train.py, which does:

gray = 0.2126*R + 0.7152*G + 0.0722*B   # [0,1]
gray = (gray - 0.5) * 2.0               # [-1,1]

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  fix: Complete auxiliary texture initialization fix  (skal)

Root cause: After swapping the init/resize order, effects with Renderer3D
crashed because resize(), called before init(), tried to use uninitialized
GPU resources.

Changes:
- Add guards in FlashCubeEffect::resize() and Hybrid3DEffect::resize() to
  check ctx_.device before calling renderer_.resize()
- Remove lazy initialization remnants from CircleMaskEffect and CNNEffect
- Register auxiliary textures directly in init() (width_/height_ already set)
- Remove ensure_texture() methods and texture_initialized_ flags

All 36 tests passing. Demo runs without crashes.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  add --save-intermediates to train.py and cnn_test  (skal)
3 days  fix: Move sigmoid activation to call site in CNN layer shader  (skal)

Conv functions now return the raw sum; sigmoid is applied at the call site.
Matches the tanh pattern used for inner layers.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  fix: Replace clamp with sigmoid in CNN final layer  (skal)

The final layer used a hard clamp, causing saturation to white when the
output exceeded 1.0. Replaced with a sigmoid activation for a smooth [0,1]
mapping with gradients.

Changes:
- train_cnn.py: torch.sigmoid() in forward pass and WGSL codegen
- WGSL shaders: 1.0/(1.0+exp(-sum)) in cnn_conv3x3/5x5 _7to1 functions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  feat: Add early stopping to CNN training  (skal)

Add --early-stop-patience and --early-stop-eps parameters to stop training
when the loss plateaus. Automatically exports weights when triggered.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 days  fix: CNN training/inference to match WGSL sliding window  (skal)

Training now computes the loss only on center pixels (excluding the conv
padding borders). Inference changed from tiling to a full-image sliding
window. Both match cnn_layer.wgsl: each pixel is processed from its NxN
neighborhood.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
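Excluding the padding border from the loss could be as simple as cropping both
tensors by the accumulated conv radius before comparing. This is a sketch; the
border width per layer and the actual crop logic in train_cnn.py may differ:

import torch.nn.functional as F

def center_loss(pred, target, kernel_size: int = 3, num_layers: int = 3):
    """Drop a border of num_layers * (kernel_size // 2) pixels on each side,
    where the stacked convolutions see zero padding instead of real pixels."""
    b = num_layers * (kernel_size // 2)
    if b == 0:
        return F.mse_loss(pred, target)
    return F.mse_loss(pred[..., b:-b, b:-b], target[..., b:-b, b:-b])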
3 days  format .wgsl layer code (cosmetics)  (skal)
4 days  fix: Use patch-based inference to match CNN training distribution  (skal)

Inference now tiles images into patches matching the training patch size,
preventing a distribution mismatch between patch training and full-image
inference.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
4 days  opt: Move invariant in1 calculation outside CNN convolution loops  (skal)

The in1 vector (uv_norm, gray, 1.0) is loop-invariant and doesn't depend on
the dx/dy offset. Moving it outside the convolution loop eliminates redundant
computation and enables better SIMD optimization.

Updated both shader files and train.py code generation.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
4 days  opt: Vec4-optimize CNN convolution shaders for SIMD  (skal)

Restructured CNN weight storage and computation for GPU SIMD efficiency.

**Weight format:**
- Before: array<array<f32, 8>, N> (scalar array)
- After: array<vec4<f32>, N*2> (vec4 pairs)

**Computation:**
- Before: 8 scalar MADs + separate bias add
- After: 2 dot4 instructions (4 parallel MADs each)
- Input: [rgba][uv,gray,1] where the 1.0 incorporates the bias

**Indexing optimization:**
- Eliminated temporary 'idx' variable
- Direct weight array indexing with 'pos'
- Unrolled output channel loop (4 iterations → 4 lines)
- Single increment: pos += 8 (was 4× pos += 2)

**Performance:**
- 2-3× GPU throughput improvement
- Better memory bandwidth (vec4 alignment)
- Fewer ALU operations per pixel

**Files:**
- cnn_conv3x3.wgsl, cnn_conv5x5.wgsl: All 3 functions per file
- train_cnn.py: Export format + code generation
- cnn_weights_generated.wgsl, cnn_layer.wgsl: Regenerated
- CNN_EFFECT.md: Updated documentation

Verified: Build clean, test_demo_effects passes, demo renders correctly.

handoff(Claude): CNN vec4 SIMD optimization complete
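On the export side, repacking the 8-scalar rows into vec4 pairs amounts to a
reshape (a sketch with an assumed function name; the bias column is the weight
on the constant-1.0 input, already divided by num_positions per the bias fix
above):

import numpy as np

def pack_vec4(weights: np.ndarray) -> np.ndarray:
    """(N, 8) rows [r,g,b,a, u,v,gray,bias] → (N*2, 4) vec4 pairs, so the
    shader computes dot(w[pos], in0) + dot(w[pos+1], in1) per output."""
    assert weights.shape[1] == 8
    return weights.reshape(-1, 4).astype(np.float32)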
4 days  chore: Update CNN architecture to 3×3×3 with new trained weights  (skal)

Changed from 3×5×3 to 3×3×3 architecture for testing.

Changes:
- cnn_layer.wgsl: Use 3×3 conv for all layers
- cnn_weights_generated.wgsl: Regenerated weights
- image_style_processor.py: Made executable

handoff(Claude): CNN mismatch analysis complete, patch extraction added, docs updated

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
4 days  docs: Update CNN training documentation with patch extraction  (skal)

Streamlined and updated all training docs with the new patch-based approach.

Changes:
- HOWTO.md: Updated training section with patch/full-image examples
- CNN_EFFECT.md: Streamlined training workflow, added detector info
- training/README.md: Complete rewrite with detector comparison table

New sections:
- Detector comparison (harris, fast, shi-tomasi, gradient)
- Practical examples for different use cases
- Tips for patch size and batch size selection
- Benefits of patch-based training

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
4 days  feat: Add salient-point patch extraction for CNN training  (skal)

Preserve natural pixel scale by extracting patches at salient points instead
of resizing entire images.

Features:
- Multiple detectors: Harris (default), FAST, Shi-Tomasi, gradient
- Configurable patch size (e.g., 32×32) and patches per image
- Automatic fallback to random patches if insufficient features

Usage:
# Patch-based training (preserves scale)
python3 train_cnn.py --input dir/ --target dir/ --patch-size 32 --patches-per-image 64 --detector harris

# Original resize mode (if --patch-size omitted)
python3 train_cnn.py --input dir/ --target dir/

Arguments:
--patch-size: Patch dimension (e.g., 32 for 32×32 patches)
--patches-per-image: Number of patches to extract per image (default: 64)
--detector: harris|fast|shi-tomasi|gradient (default: harris)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
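A sketch of Harris-based extraction with the random fallback, using OpenCV
(function names and detector parameters here are illustrative assumptions, not
the actual train_cnn.py code):

import cv2
import numpy as np

def extract_patches(img: np.ndarray, patch_size: int = 32, n: int = 64):
    """Pick the strongest Harris corners and cut patch_size×patch_size patches
    around them; fall back to random centers if too few corners are found."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=n, qualityLevel=0.01,
                                      minDistance=patch_size,
                                      useHarrisDetector=True)
    h, w = gray.shape
    half = patch_size // 2
    centers = [] if corners is None else [tuple(map(int, c.ravel())) for c in corners]
    while len(centers) < n:  # random fallback
        centers.append((np.random.randint(half, w - half),
                        np.random.randint(half, h - half)))
    patches = []
    for x, y in centers[:n]:
        x = int(np.clip(x, half, w - half))
        y = int(np.clip(y, half, h - half))
        patches.append(img[y - half:y + half, x - half:x + half])
    return patches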
4 days  fix: Correct UV coordinate computation to match PyTorch linspace  (skal)

Critical mismatch: the shader used pixel-center coordinates while PyTorch
uses pixel-corner coordinates, causing a 0.5-pixel offset.

PyTorch: linspace(0, 1, H) → [0, 1/(H-1), ..., 1]
Shader:  (p.xy - 0.5) / (resolution - 1.0) to match

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
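A quick NumPy check of the equivalence: the shader's fragment coordinate at
pixel index i is i + 0.5 (pixel center), so subtracting 0.5 and dividing by
resolution - 1 reproduces linspace(0, 1, H) exactly:

import numpy as np

H = 5
torch_style = np.linspace(0.0, 1.0, H)   # [0, 1/(H-1), ..., 1]
p = np.arange(H) + 0.5                   # shader pixel centers
shader_style = (p - 0.5) / (H - 1.0)     # matches linspace exactly
assert np.allclose(torch_style, shader_style)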
4 days  fix: Add clamp to CNN final layer to match PyTorch training  (skal)

CNN output mismatch resolved: the final layer (7→1) now clamps to [0,1].

Changes:
- Add clamp(sum, 0.0, 1.0) to cnn_conv3x3_7to1 and cnn_conv5x5_7to1
- Add generate_conv_final_function() to train_cnn.py for auto-generation
- Update comments to clarify clamping behavior
- Future exports will auto-generate final layers with correct clamp

PyTorch uses torch.clamp(out, 0.0, 1.0) on the final output; the shaders were
missing this critical operation, causing range mismatches.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>