path: root/workspaces/main/shaders/cnn_v2
Age | Commit message | Author
31 hours | CNN v2: Add mip-level support to runtime effects | skal

Binary format v2 includes mip_level in the header (20 bytes, was 16). The effect reads mip_level and passes it to the static features shader via a uniform; the shader then samples from the correct mip texture based on mip_level.

Changes:
- export_cnn_v2_weights.py: Header v2 with mip_level field
- cnn_v2_effect.h: Add StaticFeatureParams, mip_level member, params buffer
- cnn_v2_effect.cc: Read mip_level from weights, create/bind params buffer, update per-frame
- cnn_v2_static.wgsl: Accept params uniform, sample from selected mip level

Binary format v2:
- Header: 20 bytes (magic, version=2, num_layers, total_weights, mip_level)
- Backward compatible: v1 weights load with mip_level=0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
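As an illustration of the header layout described above, here is a minimal Python sketch of reading the v2 header. It assumes five little-endian 4-byte fields in the listed order (magic, version, num_layers, total_weights, mip_level) and a 16-byte v1 header without mip_level; the exact field encoding and the `read_header` helper are assumptions, not code from export_cnn_v2_weights.py.

```python
import struct

HEADER_V2 = struct.Struct("<4sIIII")  # magic, version, num_layers, total_weights, mip_level (20 bytes)
HEADER_V1 = struct.Struct("<4sIII")   # magic, version, num_layers, total_weights (16 bytes)

def read_header(blob: bytes) -> dict:
    """Parse the weights header; v1 files fall back to mip_level=0."""
    magic, version, num_layers, total_weights = HEADER_V1.unpack_from(blob, 0)
    if version >= 2:
        # v2 appends mip_level as a fifth field, growing the header to 20 bytes.
        mip_level = HEADER_V2.unpack_from(blob, 0)[4]
        offset = HEADER_V2.size
    else:
        mip_level, offset = 0, HEADER_V1.size
    return {"magic": magic, "version": version, "num_layers": num_layers,
            "total_weights": total_weights, "mip_level": mip_level,
            "payload_offset": offset}
```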
35 hours | CNNv2Effect: Document per-layer kernel size support | skal

Updated comments to clarify that per-layer kernel sizes are supported; the code already handles this correctly via the LayerInfo.kernel_size field.

Changes:
- cnn_v2_effect.h: Add comment about per-layer kernel sizes
- cnn_v2_compute.wgsl: Clarify that LayerParams provides per-layer config

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
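Purely illustrative: a small Python sketch of the per-layer configuration this commit documents, assuming a record similar to LayerInfo where each layer carries its own kernel_size. The field and function names here are hypothetical; the real handling lives in cnn_v2_effect.cc and the WGSL LayerParams.

```python
from dataclasses import dataclass

@dataclass
class LayerInfo:
    # Hypothetical mirror of the per-layer metadata; field names are assumptions.
    kernel_size: int
    weight_offset: int

def describe_layers(layers: list[LayerInfo]) -> None:
    for i, layer in enumerate(layers):
        # Each layer reads its own kernel_size instead of a global constant,
        # so e.g. 1x1 and 3x3 layers can be mixed in one network.
        taps = layer.kernel_size * layer.kernel_size
        print(f"layer {i}: {layer.kernel_size}x{layer.kernel_size} "
              f"({taps} taps), weights at offset {layer.weight_offset}")
```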
35 hours | CNN v2: Refactor to uniform 12D→4D architecture | skal

**Architecture changes:**
- Static features (8D): p0-p3 (parametric) + uv_x, uv_y, sin(10×uv_x), bias
- Input RGBD (4D): fed separately to all layers
- All layers: uniform 12D→4D (4 prev/input + 8 static → 4 output)
- Bias integrated in static features (bias=False in PyTorch)

**Weight calculations:**
- 3 layers × (12 × 3×3 × 4) = 1296 weights
- f16: 2.6 KB (vs old variable arch: ~6.4 KB)

**Updated files:**

*Training (Python):*
- train_cnn_v2.py: Uniform model, takes input_rgbd + static_features
- export_cnn_v2_weights.py: Binary export for storage buffers
- export_cnn_v2_shader.py: Per-layer shader export (debugging)

*Shaders (WGSL):*
- cnn_v2_static.wgsl: p0-p3 parametric features (mips/gradients)
- cnn_v2_compute.wgsl: 12D input, 4D output, vec4 packing

*Tools:*
- HTML tool (cnn_v2_test): Updated for 12D→4D, layer visualization

*Docs:*
- CNN_V2.md: Updated architecture, training, validation sections
- HOWTO.md: Reference HTML tool for validation

*Removed:*
- validate_cnn_v2.sh: Obsolete (used CNN v1 tool)

All code consistent with bias=False (bias in static features as 1.0).

handoff(Claude): CNN v2 architecture finalized and documented
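To make the weight arithmetic above concrete, here is a minimal PyTorch sketch of the uniform architecture. The class name, the ReLU activation, and feeding each layer the previous layer's output (input RGBD only on the first layer) are assumptions for illustration, not code taken from train_cnn_v2.py.

```python
import torch
import torch.nn as nn

class UniformCNN(nn.Module):
    """Illustrative 3-layer uniform 12D -> 4D model; bias is folded into the static features."""
    def __init__(self, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(12, 4, kernel_size=3, padding=1, bias=False)
            for _ in range(num_layers)
        )

    def forward(self, input_rgbd: torch.Tensor, static_features: torch.Tensor) -> torch.Tensor:
        # input_rgbd: (N, 4, H, W); static_features: (N, 8, H, W), last channel held at 1.0 as the bias.
        x = input_rgbd
        for conv in self.layers:
            x = torch.relu(conv(torch.cat([x, static_features], dim=1)))  # 4 + 8 = 12 input channels
        return x

model = UniformCNN()
num_weights = sum(p.numel() for p in model.parameters())
assert num_weights == 3 * (12 * 3 * 3 * 4) == 1296   # matches the commit's count
print(f"{num_weights} weights, {num_weights * 2 / 1000:.1f} KB as f16")
```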
2 days | test_demo: Add beat-synchronized CNN post-processing with version selection | skal

- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for dynamic blend in both CNN effects
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs its own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
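A rough sketch of what the beat_phase-driven blend mentioned above could look like, assuming beat_phase wraps from 0 to 1 over each beat; the pulse shape and the base_blend/peak parameters are illustrative, not the values used by test_demo.

```python
import math

def cnn_blend(beat_phase: float, base_blend: float = 0.3, peak: float = 0.9) -> float:
    """Map a 0..1 beat phase to a blend weight that peaks on the beat and relaxes between beats."""
    # Cosine pulse: 1.0 at the beat (phase 0 or 1), 0.0 halfway between beats.
    pulse = 0.5 * (1.0 + math.cos(2.0 * math.pi * beat_phase))
    return base_blend + (peak - base_blend) * pulse
```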