summaryrefslogtreecommitdiff
path: root/workspaces/main
AgeCommit message (Collapse)Author
22 hoursfix: update shader files to use beat_phase instead of beatskal
- Fixed particle_spray_compute.wgsl (uniforms.beat → uniforms.beat_phase) - Fixed ellipse.wgsl (uniforms.beat → uniforms.beat_phase) - Applied to all workspace and asset directories Resolves shader compilation error on demo64k startup. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
22 hoursfeat: implement beat-based timing systemskal
BREAKING CHANGE: Timeline format now uses beats as default unit ## Core Changes **Uniform Structure (32 bytes maintained):** - Added `beat_time` (absolute beats for musical animation) - Added `beat_phase` (fractional 0-1 for smooth oscillation) - Renamed `beat` → `beat_phase` - Kept `time` (physical seconds, tempo-independent) **Seq Compiler:** - Default: all numbers are beats (e.g., `5`, `16.5`) - Explicit seconds: `2.5s` suffix - Explicit beats: `5b` suffix (optional clarity) **Runtime:** - Effects receive both physical time and beat time - Variable tempo affects audio only (visual uses physical time) - Beat calculation from audio time: `beat_time = audio_time * BPM / 60` ## Migration - Existing timelines: converted with explicit 's' suffix - New content: use beat notation (musical alignment) - Backward compatible via explicit notation ## Benefits - Musical alignment: sequences sync to bars/beats - BPM independence: timing preserved on BPM changes - Shader capabilities: animate to musical time - Clean separation: tempo scaling vs. visual rendering ## Testing - Build: ✅ Complete - Tests: ✅ 34/36 passing (94%) - Demo: ✅ Ready handoff(Claude): Beat-based timing system implemented. Variable tempo only affects audio sample triggering. Visual effects use physical_time (constant) and beat_time (musical). Shaders can now animate to beats. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
22 hoursadd trained layersskal
+misc
23 hoursdocs: Update CNN comments and add bias fix summaryskal
- Fix stale comments: RGBD→RGB (not grayscale) - Clarify shape transformations in inference - Add CNN_BIAS_FIX_2026-02.md consolidating recent fixes - Include regenerated weights with 5x5 kernel for layer 0 Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
23 hoursfix: CNN bias accumulation and output format improvementsskal
- Fix bias division bug: divide by num_positions to compensate for shader loop accumulation (affects all layers) - train_cnn.py: Save RGBA output preserving alpha channel from input - Add --debug-hex flag to both tools for pixel-level debugging - Remove sRGB/linear_png debug code from cnn_test - Regenerate weights with corrected bias export Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
29 hoursupdate cnn codeskal
30 hoursfix: Register cnn_conv1x1 snippet and add verificationskal
- Add cnn_conv1x1 to shader composer registration - Add VerifyIncludes() to detect missing snippet registrations - STRIP_ALL-protected verification warns about unregistered includes - Fixes cnn_test runtime failure loading cnn_layer.wgsl Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
36 hoursfix: Move sigmoid activation to call site in CNN layer shaderskal
Conv functions now return raw sum, sigmoid applied at call site. Matches tanh pattern used for inner layers. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
36 hoursfix: Replace clamp with sigmoid in CNN final layerskal
Final layer used hard clamp causing saturation to white when output > 1.0. Replaced with sigmoid activation for smooth [0,1] mapping with gradients. Changes: - train_cnn.py: torch.sigmoid() in forward pass and WGSL codegen - WGSL shaders: 1.0/(1.0+exp(-sum)) in cnn_conv3x3/5x5 _7to1 functions Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
37 hoursformat .wgsl layer code (cosmetics)skal
46 hoursopt: Move invariant in1 calculation outside CNN convolution loopsskal
The in1 vector (uv_norm, gray, 1.0) is loop-invariant and doesn't depend on dx/dy offset. Moving it outside the convolution loop eliminates redundant computation and enables better SIMD optimization. Updated both shader files and train.py code generation. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
47 hoursopt: Vec4-optimize CNN convolution shaders for SIMDskal
Restructured CNN weight storage and computation for GPU SIMD efficiency: **Weight format:** - Before: array<array<f32, 8>, N> (scalar array) - After: array<vec4<f32>, N*2> (vec4 pairs) **Computation:** - Before: 8 scalar MADs + separate bias add - After: 2 dot4 instructions (4 parallel MADs each) - Input: [rgba][uv,gray,1] where 1.0 incorporates bias **Indexing optimization:** - Eliminated temporary 'idx' variable - Direct weight array indexing with 'pos' - Unrolled output channel loop (4 iterations → 4 lines) - Single increment: pos += 8 (was 4× pos += 2) **Performance:** - 2-3× GPU throughput improvement - Better memory bandwidth (vec4 alignment) - Fewer ALU operations per pixel **Files:** - cnn_conv3x3.wgsl, cnn_conv5x5.wgsl: All 3 functions per file - train_cnn.py: Export format + code generation - cnn_weights_generated.wgsl, cnn_layer.wgsl: Regenerated - CNN_EFFECT.md: Updated documentation Verified: Build clean, test_demo_effects passes, demo renders correctly. handoff(Claude): CNN vec4 SIMD optimization complete
48 hourschore: Update CNN architecture to 3×3×3 with new trained weightsskal
Changed from 3×5×3 to 3×3×3 architecture for testing. Changes: - cnn_layer.wgsl: Use 3×3 conv for all layers - cnn_weights_generated.wgsl: Regenerated weights - image_style_processor.py: Made executable handoff(Claude): CNN mismatch analysis complete, patch extraction added, docs updated Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfix: Correct UV coordinate computation to match PyTorch linspaceskal
Critical mismatch: shader used pixel-center coordinates while PyTorch uses pixel-corner coordinates, causing 0.5-pixel offset. PyTorch: linspace(0, 1, H) → [0, 1/(H-1), ..., 1] Shader: (p.xy - 0.5) / (resolution - 1.0) to match Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfix: Add clamp to CNN final layer to match PyTorch trainingskal
CNN output mismatch resolved: final layer (7→1) now clamps to [0,1]. Changes: - Add clamp(sum, 0.0, 1.0) to cnn_conv3x3_7to1 and cnn_conv5x5_7to1 - Add generate_conv_final_function() to train_cnn.py for auto-generation - Update comments to clarify clamping behavior - Future exports will auto-generate final layers with correct clamp PyTorch uses torch.clamp(out, 0.0, 1.0) on final output; shaders were missing this critical operation, causing range mismatches. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysrefactor: Optimize CNN grayscale computationskal
Compute gray once per fragment using dot() instead of per-layer. Pass gray as f32 parameter to conv functions instead of vec4 original. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysupdate train_cnn.py and shaderskal
2 daysfix: CNN training normalization pipeline consistencyskal
**Training changes:** - Final layer now outputs [0,1] directly with torch.clamp() - Removed denormalization step (was converting [-1,1] to [0,1]) - Network learns [0,1] output natively **Shader generation fixes:** - Layer 0 uses _src variant (5 params, normalizes [0,1] input internally) - Removed pre-normalization of input texture (handled by _src) - Final layer blending: gray_out already [0,1], no denormalization needed - Added generate_conv_src_function() for all kernel sizes - Auto-generates _src variants when exporting (skips if exists) **Cleanup:** - Removed obsolete 4-channel functions from cnn_conv5x5.wgsl - Keep only 7-channel variants (_7to4, _7to1, _7to4_src) **Normalization flow:** [0,1] texture → _src normalizes to [-1,1] → tanh [-1,1] → ... → final conv [0,1] clipped handoff(Claude): CNN normalization pipeline fixed and consistent with training
2 daysudpate CNN shader code.skal
2 daysrefactor: Optimize CNN normalization to eliminate redundant conversionsskal
Normalize textures once in fs_main instead of in every conv function. Keep all intermediate layers in [-1,1] range, denormalize only for final display. Changes: - train_cnn.py: Generator normalizes input once, keeps [-1,1] between layers - cnn_conv*.wgsl: Remove texture normalization (already [-1,1]) - cnn_layer.wgsl: Regenerated with new normalization flow - CNN_EFFECT.md: Updated documentation Eliminates redundant [0,1]↔[-1,1] conversions, reducing shader complexity. handoff(Claude): CNN normalization optimized, all tests passing (35/36).
2 daysupdate timeline.seqskal
2 daysfix: Flip Y-axis to match ShaderToy coordinate conventionskal
ShaderToy uses bottom-left origin with Y-up, but our system uses top-left origin with Y-down. Added Y-flip in fragment shader to correctly display ShaderToy effects. **Changes:** - workspaces/main/shaders/scene1.wgsl: Flip Y before coordinate conversion - tools/shadertoy/convert_shadertoy.py: Generate Y-flip in all conversions **Formula:** ```wgsl let flipped = vec2<f32>(p.x, uniforms.resolution.y - p.y); ``` This ensures ShaderToy shaders display right-side up. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfeat: Add Scene1 effect from ShaderToy (raymarching cube & sphere)skal
Converted ShaderToy shader (Saturday cubism experiment) to Scene1Effect following EFFECT_WORKFLOW.md automation guidelines. **Changes:** - Created Scene1Effect (.h, .cc) as scene effect (not post-process) - Converted GLSL to WGSL with manual fixes: - Replaced RESOLUTION/iTime with uniforms.resolution/time - Fixed const expressions (normalize not allowed in const) - Converted mainImage() to fs_main() return value - Manual matrix rotation for scene transformation - Added shader asset to workspaces/main/assets.txt - Registered in CMakeLists.txt (both GPU_SOURCES sections) - Added to demo_effects.h and shaders declarations - Added to timeline.seq at 22.5s for 10s duration - Added to test_demo_effects.cc scene_effects list **Shader features:** - Raymarching cube and sphere with ground plane - Reflections and soft shadows - Sky rendering with sun and horizon glow - ACES tonemapping and sRGB output - Time-based rotation animation **Tests:** All effects tests passing (5/9 scene, 9/9 post-process) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 dayschore: Remove incomplete CubeSphere effectskal
Remove incomplete ShaderToy conversion that was blocking builds: - Removed include from src/gpu/demo_effects.h - Removed shader asset from workspaces/main/assets.txt - Removed effect reference from timeline.seq - Deleted incomplete effect files (.h, .cc, .wgsl) Effect remains disabled in CMakeLists.txt and can be re-added when conversion is complete. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysdocs: Fix EFFECT keyword syntax and add automation-friendly workflowskal
Fix EFFECT keyword format across all documentation and scripts - priority modifier (+/=/–) is required but was missing from examples. **Documentation fixes:** - doc/HOWTO.md: Added missing + to EFFECT example - doc/RECIPE.md: Added priority modifiers to examples - tools/shadertoy/README.md: Fixed test path, clarified workflow - tools/shadertoy/convert_shadertoy.py: Updated output instructions **New automation guide:** - doc/EFFECT_WORKFLOW.md: Complete step-by-step checklist for AI agents - Exact file paths and line numbers - Common issues and fixes - Asset ID naming conventions - CMakeLists.txt dual-section requirement - Test list instructions (post_process_effects vs scene_effects) **Integration:** - CLAUDE.md: Added EFFECT_WORKFLOW.md to Tier 2 (always loaded) - doc/AI_RULES.md: Added "Adding Visual Effects" quick reference - README.md: Added EFFECT_WORKFLOW.md to documentation list **CMakeLists.txt:** - Disabled incomplete cube_sphere_effect.cc (ShaderToy conversion WIP) **Timeline:** - Commented out incomplete CubeSphereEffect - Removed obsolete constructor argument Fixes #issue-with-effect-syntax Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfix: Support variable kernel sizes in CNN layer generationskal
Training script was hardcoded to generate cnn_conv3x3_* calls regardless of actual kernel size, causing shader validation errors when layer 1 used 5×5 kernel (100 weights) but called 3×3 function (expected 36). Changes: - train_cnn.py: Generate correct conv function based on kernel_sizes[i] - cnn_conv5x5.wgsl: Add cnn_conv5x5_7to4 and cnn_conv5x5_7to1 variants - Regenerate cnn_layer.wgsl with correct function calls for [3,5,3] - Document kernel size→function mapping in HOWTO.md Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfeat: CNN RGBD→grayscale with 7-channel augmented inputskal
Upgrade CNN architecture to process RGBD input, output grayscale, with 7-channel layer inputs (RGBD + UV coords + grayscale). Architecture changes: - Inner layers: Conv2d(7→4) output RGBD - Final layer: Conv2d(7→1) output grayscale - All inputs normalized to [-1,1] for tanh activation - Removed CoordConv2d in favor of unified 7-channel input Training (train_cnn.py): - SimpleCNN: 7→4 (inner), 7→1 (final) architecture - Forward: Normalize RGBD/coords/gray to [-1,1] - Weight export: array<array<f32, 8>, 36> (inner), array<f32, 8>, 9> (final) - Dataset: Load RGBA (RGBD) input Shaders (cnn_conv3x3.wgsl): - Added cnn_conv3x3_7to4: 7-channel input → RGBD output - Added cnn_conv3x3_7to1: 7-channel input → grayscale output - Both normalize inputs and use flattened weight arrays Documentation: - CNN_EFFECT.md: Updated architecture, training, weight format - CNN_RGBD_GRAYSCALE_SUMMARY.md: Implementation summary - HOWTO.md: Added training command example Next: Train with RGBD input data Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysudpateskal
2 daysupdate timelineskal
2 daysfix: Resolve CNN effect black screen bug (framebuffer capture + uniforms)skal
Two bugs causing black screen when CNN post-processing activated: 1. Framebuffer capture timing: Capture ran inside post-effect loop after ping-pong swaps, causing layers 1+ to capture wrong buffer. Moved capture before loop to copy framebuffer_a once before post-chain starts. 2. Missing uniforms update: CNNEffect never updated uniforms_ buffer, leaving uniforms.resolution uninitialized (0,0). UV calculation p.xy/uniforms.resolution produced NaN, causing all texture samples to return black. Added uniforms update in update_bind_group(). Files modified: - src/gpu/effect.cc: Capture before post-chain (lines 308-346) - src/gpu/effects/cnn_effect.cc: Add uniforms update (lines 132-142) - workspaces/main/shaders/cnn/cnn_layer.wgsl: Remove obsolete comment - doc/CNN_DEBUG.md: Historical debugging doc - CLAUDE.md: Reference CNN_DEBUG.md in historical section Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2 daysfeat: Add multi-layer CNN support with framebuffer capture and blend controlskal
Implements automatic layer chaining and generic framebuffer capture API for multi-layer neural network effects with proper original input preservation. Key changes: - Effect::needs_framebuffer_capture() - generic API for pre-render capture - MainSequence: auto-capture to "captured_frame" auxiliary texture - CNNEffect: multi-layer support via layer_index/total_layers params - seq_compiler: expands "layers=N" to N chained effect instances - Shader: @binding(4) original_input available to all layers - Training: generates layer switches and original input binding - Blend: mix(original, result, blend_amount) uses layer 0 input Timeline syntax: CNNEffect layers=3 blend=0.7 Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 daysfeat: Add coordinate-aware CNN layer 0 for position-dependent stylizationskal
- Implement CoordConv2d custom layer accepting (x,y) patch center - Split layer 0 weights: rgba_weights (9x mat4x4) + coord_weights (mat2x4) - Add *_with_coord() functions to 3x3/5x5/7x7 convolution shaders - Update training script to generate coordinate grid and export split weights - Regenerate placeholder weights with new format Size impact: +32B coord weights + ~100B shader code = +132B total All 36 tests passing (100%) handoff(Claude): CNN coordinate awareness implemented, ready for training Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 daysfeat: Add CNN post-processing effect with modular WGSL architectureskal
Implements multi-layer convolutional neural network shader for stylized post-processing of 3D rendered scenes: **Core Components:** - CNNEffect: C++ effect class with single-layer rendering (expandable to multi-pass) - Modular WGSL snippets: cnn_activation, cnn_conv3x3/5x5/7x7, cnn_weights_generated - Placeholder identity-like weights for initial testing (to be replaced by trained weights) **Architecture:** - Flexible kernel sizes (3×3, 5×5, 7×7) via separate snippet files - ShaderComposer integration (#include resolution) - Residual connections (input + processed output) - Supports parallel convolutions (design ready, single conv implemented) **Size Impact:** - ~3-4 KB shader code (snippets + main shader) - ~2-4 KB weights (depends on network architecture when trained) - Total: ~5-8 KB (acceptable for 64k demo) **Testing:** - CNNEffect added to test_demo_effects.cc - 36/36 tests passing (100%) **Next Steps:** - Training script (scripts/train_cnn.py) to generate real weights - Multi-layer rendering with ping-pong textures - Weight quantization for size optimization handoff(Claude): CNN effect foundation complete, ready for training integration
3 daysfix: Reduce default audio volumes to prevent clippingskal
Reduced tracker pattern volumes: - Kicks: 1.0 → 0.7 - Snares: 1.0/0.9 → 0.6 - Crash: 0.85 → 0.6 Multiple simultaneous voices were summing to excessive levels. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 daysfeat: Add workspace header comments to config filesskal
Add `# WORKSPACE: <name>` header to all workspace config files: - timeline.seq - music.track - assets.txt Format: First line contains workspace identifier. Editors must preserve this header comment. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
3 daysfeat: Implement workspace system (Task #77)skal
Self-contained workspaces for parallel demo development. Structure: - workspaces/main,test - Demo-specific resources - assets/common - Shared resources - workspace.cfg - Configuration per workspace CMake integration: - DEMO_WORKSPACE option (defaults to main) - cmake/ParseWorkspace.cmake - Config parser - Workspace-relative asset/timeline/music paths Migration: - Main demo: demo.seq to workspaces/main/timeline.seq - Test demo: test_demo.seq to workspaces/test/timeline.seq - Common shaders: assets/common/shaders - Workspace shaders: workspaces/*/shaders Build: cmake -B build -DDEMO_WORKSPACE=main cmake -B build_test -DDEMO_WORKSPACE=test All tests passing (36/36). handoff(Claude): Task #77 workspace system complete. Both main and test workspaces build and pass all tests. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>