Training changes:
- Changed p3 default depth from 0.0 to 1.0 (far plane semantics)
- Extract depth from the target alpha channel in both datasets
- Consistent alpha-as-depth handling across training and validation (see the sketch below)
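A minimal sketch of the alpha-as-depth extraction, assuming RGBA training
targets; the helper name is hypothetical, not the repo's actual loader:

    import numpy as np
    from PIL import Image

    def load_target_with_depth(path):
        # Treat the alpha channel as depth; convert("RGBA") fills a
        # missing alpha with 255, which maps to the 1.0 far-plane default.
        img = np.asarray(Image.open(path).convert("RGBA"), dtype=np.float32) / 255.0
        return img[..., :3], img[..., 3]  # (rgb, depth)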
Test tool enhancements (cnn_test):
- Added load_depth_from_alpha() for R32Float depth texture
- Fixed bind group layout for UnfilterableFloat sampling
- Added --save-intermediates with per-channel grayscale composites
- Each layer is saved as a 4x-wide PNG (p0-p3 stacked horizontally); see the sketch below
- Global layers_composite.png gives a vertical stack of all layers for a quick overview
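A sketch of the per-channel composite writer, assuming a layer's channels
arrive as a (4, H, W) float array in [0, 1]; the function name is hypothetical:

    import numpy as np
    from PIL import Image

    def save_layer_composite(channels, path):
        # channels: (4, H, W) -> one grayscale strip of shape (H, 4*W)
        strip = np.concatenate(list(channels), axis=1)
        Image.fromarray((strip * 255).astype(np.uint8), mode="L").save(path)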
Investigation notes:
- Static features p4-p7 are computed and bound correctly
- Sin_20_y pattern visibility differs between the tools; still under investigation
- Binary weights timestamped Feb 13 20:36 vs. HTML tool Feb 13 22:12
- Next: Update HTML tool with canonical binary weights
handoff(Claude): HTML tool weights update pending - base64-encoded
canonical weights ready in /tmp/weights_b64.txt for the line 392 replacement.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Binary format v2 includes mip_level in the header (20 bytes, was 16).
The effect reads mip_level and passes it to the static features shader via a uniform.
The shader then samples the correct mip texture based on mip_level.
Changes:
- export_cnn_v2_weights.py: Header v2 with mip_level field
- cnn_v2_effect.h: Add StaticFeatureParams, mip_level member, params buffer
- cnn_v2_effect.cc: Read mip_level from weights, create/bind params buffer, update per-frame
- cnn_v2_static.wgsl: Accept params uniform, sample from selected mip level
Binary format v2:
- Header: 20 bytes (magic, version=2, num_layers, total_weights, mip_level)
- Backward compatible: v1 weights (16-byte header) load with mip_level=0; see the parsing sketch below
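A parsing sketch for the header described above, assuming five little-endian
u32 fields in the order given by this message (the exact layout written by
export_cnn_v2_weights.py is an assumption):

    import struct

    def read_header(buf):
        magic, version = struct.unpack_from("<II", buf, 0)
        if version >= 2:
            num_layers, total_weights, mip_level = struct.unpack_from("<III", buf, 8)
            body_offset = 20
        else:  # v1: 16-byte header without the mip_level field
            num_layers, total_weights = struct.unpack_from("<II", buf, 8)
            mip_level, body_offset = 0, 16
        return magic, num_layers, total_weights, mip_level, body_offset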
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Updated comments to clarify that per-layer kernel sizes are supported.
The code already handles this correctly via the LayerInfo.kernel_size field
(see the sketch below).
Changes:
- cnn_v2_effect.h: Add comment about per-layer kernel sizes
- cnn_v2_compute.wgsl: Clarify LayerParams provides per-layer config
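A sketch of reading one per-layer record; the field set mirrors what
LayerInfo plausibly holds and is an assumption, not the actual struct layout:

    import struct

    def read_layer_info(buf, offset):
        # Assumed fields: input channels, output channels, kernel size.
        in_ch, out_ch, kernel_size = struct.unpack_from("<III", buf, offset)
        return in_ch, out_ch, kernel_size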
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for the dynamic blend in both CNN effects (sketched below)
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs its own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization
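A sketch of the beat_phase modulation, assuming beat_phase wraps in [0, 1)
once per beat; the actual curve used by the effects is an assumption:

    import math

    def dynamic_blend(base_blend, beat_phase):
        # Pulse once per beat, peaking at the start of the beat.
        pulse = 0.5 * (1.0 + math.cos(2.0 * math.pi * beat_phase))
        return base_blend * pulse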
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Add binary weight format (header + layer info + packed f16); packing sketched below
- New export_cnn_v2_weights.py for binary weight export
- Single cnn_v2_compute.wgsl shader with storage buffer
- Load weights in CNNv2Effect::load_weights()
- Create layer compute pipeline with 5 bindings
- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels
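A sketch of the f32-to-f16 packing step in the binary export, assuming
weights arrive as float32 numpy arrays; the function name is hypothetical:

    import numpy as np

    def pack_weights_f16(layer_weights):
        # Quantize each layer f32 -> f16, flatten, and concatenate into
        # one contiguous native-endian f16 byte blob.
        flat = np.concatenate([w.astype(np.float16).ravel() for w in layer_weights])
        return flat.tobytes()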
Next: Complete bind group creation and multi-layer compute execution
|
|
Complete multi-pass compute execution for CNNv2Effect.
Implementation:
- Layer texture creation (ping-pong buffers for intermediate results)
- Static features compute pipeline with bind group layout
- Bind group creation with 5 bindings (input mips + depth + output)
- compute() override for multi-pass execution
- Static features pass with proper workgroup dispatch (dispatch math sketched below)
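The dispatch math for the 8x8 workgroup size, with ceil-division so partial
edge tiles are still covered:

    def dispatch_counts(width, height, wg=8):
        # Number of workgroups along x and y for a wg x wg workgroup.
        return (width + wg - 1) // wg, (height + wg - 1) // wg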
Architecture:
- Static features: 8×f16 packed as 4×u32 (RGBD + UV + sin + bias); packing sketched below
- Layer buffers: 2×RGBA32Uint textures (8 channels f16 each)
- Input mips: 3 levels (0, 1, 2) for multi-scale features
- Workgroup size: 8×8 threads
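A sketch of the 8×f16-as-4×u32 packing, mirroring WGSL's pack2x16float
(the low 16 bits take the first component); array shapes are assumptions:

    import numpy as np

    def pack_8xf16(channels):
        # channels: float32 array of shape (..., 8) -> u32 array of shape (..., 4)
        halves = channels.astype(np.float16).view(np.uint16).astype(np.uint32)
        return halves[..., 0::2] | (halves[..., 1::2] << 16)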
Status:
- Static features compute pass functional
- Layer pipeline infrastructure ready
- All 36/36 tests passing
Next: Layer shader integration, multi-layer execution
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Infrastructure for enhanced CNN post-processing with 7D feature input.
Phase 1: Shaders
- Static features compute (RGBD + UV + sin10_x + bias → 8×f16)
- Layer template (convolution skeleton, packing/unpacking)
- 3 mip level support for multi-scale features
Phase 2: C++ Effect
- CNNv2Effect class (multi-pass architecture)
- Texture management (static features, layer buffers)
- Build integration (CMakeLists, assets, tests)
Phase 3: Training Pipeline
- train_cnn_v2.py: PyTorch model with static feature concatenation (sketched below)
- export_cnn_v2_shader.py: f32→f16 quantization, WGSL generation
- Configurable architecture (kernels, channels)
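A sketch of the static-feature concatenation in train_cnn_v2.py, assuming
NCHW tensors and the channel order listed above; the sin frequency and exact
construction are assumptions:

    import torch

    def add_static_features(rgbd):
        # rgbd: (N, 4, H, W) -> (N, 8, H, W) with UV, sin, and bias appended
        n, _, h, w = rgbd.shape
        ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                                torch.linspace(0, 1, w), indexing="ij")
        uv = torch.stack([xs, ys]).expand(n, 2, h, w)
        sin_x = torch.sin(10.0 * torch.pi * xs).expand(n, 1, h, w)
        bias = torch.ones(n, 1, h, w)
        return torch.cat([rgbd, uv, sin_x, bias], dim=1)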
Phase 4: Validation
- validate_cnn_v2.sh: End-to-end pipeline
- Checkpoint → shaders → build → test images
Tests: 36/36 passing
Next: Complete render pipeline implementation (bind groups, multi-pass)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|