- Added --infer flag for single-image inference
- Loads checkpoint, runs forward pass, saves PNG output
- Useful for verifying that the shader output matches the trained model (sketched below)
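
A minimal sketch of what such an --infer path does, assuming PyTorch/torchvision and importing SimpleCNN from train_cnn.py (the real flag handling and forward signature may differ):

```python
# Illustrative --infer handler; checkpoint keys and the forward
# signature are assumptions, not the repo's exact API.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

from train_cnn import SimpleCNN  # class named later in this log

def infer(checkpoint_path: str, image_path: str, out_path: str) -> None:
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    model = SimpleCNN()
    model.load_state_dict(ckpt["model"])   # assumed checkpoint key
    model.eval()

    rgba = to_tensor(Image.open(image_path).convert("RGBA")).unsqueeze(0)
    with torch.no_grad():
        out = model(rgba)                  # real model may also take coords/gray
    # Float tensor in [0, 1] is scaled to uint8; a .png extension => PNG output.
    to_pil_image(out.squeeze(0).clamp(0.0, 1.0)).save(out_path)
```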
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
**Training changes:**
- Final layer now outputs [0,1] directly with torch.clamp()
- Removed denormalization step (was converting [-1,1] to [0,1])
- Network learns [0,1] output natively
**Shader generation fixes:**
- Layer 0 uses _src variant (5 params, normalizes [0,1] input internally)
- Removed pre-normalization of input texture (handled by _src)
- Final layer blending: gray_out already [0,1], no denormalization needed
- Added generate_conv_src_function() for all kernel sizes
- Auto-generates _src variants on export (skipped if they already exist)
**Cleanup:**
- Removed obsolete 4-channel functions from cnn_conv5x5.wgsl
- Keep only 7-channel variants (_7to4, _7to1, _7to4_src)
**Normalization flow:**
[0,1] texture → _src normalizes to [-1,1] → tanh [-1,1] → ... → final conv clamped to [0,1]
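
On the training side this corresponds to clamping in the final conv rather than denormalizing afterwards; a small sketch (class and layer names are illustrative):

```python
import torch
import torch.nn as nn

class FinalStage(nn.Module):
    """Illustrative final layer: outputs [0, 1] natively via clamp."""
    def __init__(self):
        super().__init__()
        self.final = nn.Conv2d(7, 1, 3, padding=1)   # 7-channel in, gray out

    def forward(self, x):                            # x in [-1, 1]
        return torch.clamp(self.final(x), 0.0, 1.0)  # no denormalization step
```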
handoff(Claude): CNN normalization pipeline fixed and consistent with training
---
Normalize textures once in fs_main instead of in every conv function.
Keep all intermediate layers in [-1,1] range, denormalize only for final display.
Changes:
- train_cnn.py: Generator normalizes input once, keeps [-1,1] between layers
- cnn_conv*.wgsl: Remove texture normalization (already [-1,1])
- cnn_layer.wgsl: Regenerated with new normalization flow
- CNN_EFFECT.md: Updated documentation
Eliminates redundant [0,1]↔[-1,1] conversions, reducing shader complexity.
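
A sketch of the resulting flow on the training side, assuming the generator keeps its convs in an nn.ModuleList (illustrative names, not the exact class):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):                # x: input texture in [0, 1]
        x = x * 2.0 - 1.0                # normalize once, up front
        for layer in self.layers:
            x = torch.tanh(layer(x))     # intermediates stay in [-1, 1]
        return (x + 1.0) * 0.5           # denormalize only for final display
```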
handoff(Claude): CNN normalization optimized, 35/36 tests passing.
---
Training script was hardcoded to generate cnn_conv3x3_* calls regardless
of actual kernel size, causing shader validation errors when layer 1 used
5×5 kernel (100 weights) but called 3×3 function (expected 36).
Changes:
- train_cnn.py: Generate the correct conv function based on kernel_sizes[i] (sketched below)
- cnn_conv5x5.wgsl: Add cnn_conv5x5_7to4 and cnn_conv5x5_7to1 variants
- Regenerate cnn_layer.wgsl with correct function calls for [3,5,3]
- Document kernel size→function mapping in HOWTO.md
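
The fix boils down to deriving the generated function name from the configured kernel size instead of hardcoding 3x3; an illustrative sketch (the real template in train_cnn.py will differ):

```python
def conv_call(layer_idx: int, kernel_sizes: list[int], is_final: bool) -> str:
    """Pick the WGSL conv function for a layer from its kernel size."""
    k = kernel_sizes[layer_idx]            # e.g. kernel_sizes = [3, 5, 3]
    suffix = "7to1" if is_final else "7to4"
    return f"cnn_conv{k}x{k}_{suffix}"     # emitted into cnn_layer.wgsl

assert conv_call(1, [3, 5, 3], is_final=False) == "cnn_conv5x5_7to4"
```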
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
Upgrade the CNN architecture to process RGBD input and output grayscale, with
7-channel layer inputs (RGBD + UV coords + grayscale).
Architecture changes:
- Inner layers: Conv2d(7→4) output RGBD
- Final layer: Conv2d(7→1) output grayscale
- All inputs normalized to [-1,1] for tanh activation
- Removed CoordConv2d in favor of unified 7-channel input
Training (train_cnn.py):
- SimpleCNN: 7→4 (inner), 7→1 (final) architecture (sketched below)
- Forward: Normalize RGBD/coords/gray to [-1,1]
- Weight export: array<array<f32, 8>, 36> (inner), array<array<f32, 8>, 9> (final)
- Dataset: Load RGBA (RGBD) input
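
A minimal sketch of the architecture as described; same-padding and the exact forward signature are assumptions beyond what the log states:

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """7-channel input (RGBD + UV + gray) per layer; grayscale output."""
    def __init__(self, kernel_sizes=(3, 3, 3)):
        super().__init__()
        *inner, final = kernel_sizes
        self.inner = nn.ModuleList(
            nn.Conv2d(7, 4, k, padding=k // 2) for k in inner)
        self.final = nn.Conv2d(7, 1, final, padding=final // 2)

    def forward(self, rgbd, coords, gray):
        # rgbd [N,4,H,W], coords [N,2,H,W], gray [N,1,H,W],
        # all pre-normalized to [-1, 1] for tanh.
        for conv in self.inner:
            rgbd = torch.tanh(conv(torch.cat([rgbd, coords, gray], dim=1)))
        return self.final(torch.cat([rgbd, coords, gray], dim=1))
```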
Shaders (cnn_conv3x3.wgsl):
- Added cnn_conv3x3_7to4: 7-channel input → RGBD output
- Added cnn_conv3x3_7to1: 7-channel input → grayscale output
- Both normalize inputs and use flattened weight arrays
Documentation:
- CNN_EFFECT.md: Updated architecture, training, weight format
- CNN_RGBD_GRAYSCALE_SUMMARY.md: Implementation summary
- HOWTO.md: Added training command example
Next: Train with RGBD input data
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
Implements automatic layer chaining and generic framebuffer capture API for
multi-layer neural network effects with proper original input preservation.
Key changes:
- Effect::needs_framebuffer_capture() - generic API for pre-render capture
- MainSequence: auto-capture to "captured_frame" auxiliary texture
- CNNEffect: multi-layer support via layer_index/total_layers params
- seq_compiler: expands "layers=N" to N chained effect instances (sketched below)
- Shader: @binding(4) original_input available to all layers
- Training: generates layer switches and original input binding
- Blend: mix(original, result, blend_amount) uses layer 0 input
Timeline syntax: CNNEffect layers=3 blend=0.7
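
Illustrative expansion of the layers=N parameter into chained instances (the real seq_compiler is not Python, and its data model will differ):

```python
def expand_layers(effect: str, params: dict) -> list[dict]:
    """Expand 'layers=N' into N chained instances of the same effect."""
    total = int(params.pop("layers", 1))
    return [{"effect": effect,
             "layer_index": i,        # which layer this instance runs
             "total_layers": total,   # lets the last layer blend/output
             **params}
            for i in range(total)]

# "CNNEffect layers=3 blend=0.7" -> three chained CNNEffect instances
instances = expand_layers("CNNEffect", {"layers": 3, "blend": 0.7})
```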
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
- Document coordinate-aware layer 0 architecture
- Add checkpointing examples and options table
- Consolidate training workflow with practical examples
- Clarify CoordConv2d usage and size impact
- Streamline training/README.md structure
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
- Save checkpoints every N epochs (--checkpoint-every)
- Resume from checkpoint (--resume)
- Store model, optimizer, epoch, loss, and architecture info (format sketched below)
- Auto-create checkpoint directory
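
A sketch of the checkpoint round-trip consistent with the contents listed above (key names are assumptions):

```python
import os
import torch

def save_checkpoint(path, model, optimizer, epoch, loss, arch):
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)  # auto-create dir
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "epoch": epoch,
                "loss": loss,
                "arch": arch},        # e.g. kernel sizes / channel counts
               path)

def load_checkpoint(path, model, optimizer):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"], ckpt["loss"]  # resume training from here
```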
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
- Implement CoordConv2d custom layer accepting (x,y) patch center (sketched below)
- Split layer 0 weights: rgba_weights (9x mat4x4) + coord_weights (mat2x4)
- Add *_with_coord() functions to 3x3/5x5/7x7 convolution shaders
- Update training script to generate coordinate grid and export split weights
- Regenerate placeholder weights with new format
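
A hedged sketch of a CoordConv2d with the rgba_weights/coord_weights split; grid construction and the module signature are assumptions:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv over RGBA plus a separate linear term on (x, y) patch centers."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)  # rgba_weights
        self.coord = nn.Linear(2, out_ch, bias=False)    # coord_weights

    def forward(self, x):
        n, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1)             # (H, W, 2) centers
        coord_term = self.coord(grid).permute(2, 0, 1)   # (out_ch, H, W)
        return self.conv(x) + coord_term.unsqueeze(0)    # broadcast over batch
```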
Size impact: +32B coord weights + ~100B shader code = +132B total
All 36 tests passing (100%)
handoff(Claude): CNN coordinate awareness implemented, ready for training
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>