| Age | Commit message | Author |
|
Complete v1→v2 migration cleanup: rename 29 files (sequence_v2→sequence, effect_v2→effect, 14 effect files, 8 shaders, compiler, docs), update all class names and references across 54 files. Archive v1 timeline. System now uses standard naming with all versioning removed. 30/34 tests passing.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Add Hybrid3DEffectV2 with Renderer3D integration
- Simplified scene (1 center cube + 8 surrounding objects)
- Use NodeRegistry for depth buffer
- Update timeline_v2.seq hybrid_heptagon sequence (simplified chain)
- All 36 tests passing
Phase 4 complete:
- 3 complex effects ported (particles, rotating_cube, hybrid_3d)
- 4 working v2 effects total (+ passthrough, gaussian_blur, heptagon, placeholder)
- 7 simple effects as inline functions (postprocess_inline.wgsl)
- V2 timeline integrated with build system
- All sequences functional with v2 effects
handoff(Claude): Phase 4 effect ports complete
|
|
- Add RotatingCubeEffectV2 with 3D rendering + depth buffer
- Create rotating_cube_v2.wgsl (hardcoded cube geometry)
- Simplified: no auxiliary mask texture dependency
- Declare depth node via NodeRegistry
- Update timeline_v2.seq rotating_cube sequence
- Add shader exports to shaders.{h,cc}
- All 36 tests passing
handoff(Claude): RotatingCube v2 complete, hybrid_3d next
|
|
- Add ParticlesEffectV2 with compute + render passes
- Create particle_compute_v2.wgsl and particle_render_v2.wgsl
- Use UniformsSequenceParams for beat-synchronized particles
- Update timeline_v2.seq particles sequence (simplified 2-effect chain)
- Add shader exports to shaders.{h,cc}
- All 36 tests passing
handoff(Claude): Particles v2 complete, rotating_cube next
|
|
- Update main workspace to use timeline_v2.seq
- Add SEQ_COMPILER_V2 using Python script (seq_compiler_v2.py)
- Update DemoCodegen to use v2 compiler for main timeline
- Add v1 compatibility stubs (LoadTimeline, GetDemoDuration)
- Demo builds and links successfully
- All tests passing (36/36)
V2 timeline now integrated into build pipeline. Stub functions allow
linking while proper MainSequence v2 integration is pending.
handoff(Claude): V2 timeline integrated, ready for effect ports
|
|
- Add PlaceholderEffectV2 for unported effects (logs TODO warning)
- Create timeline_v2.seq with 8 sequences using v2 syntax
- Explicit node routing (source -> temp1 -> temp2 -> sink)
- Uses: HeptagonEffectV2, GaussianBlurEffectV2, PlaceholderEffectV2
- Compiler generates valid C++ for all sequences
- All tests passing (36/36)
Timeline structure validated. Placeholders allow demo to run while
complex effects (rotating_cube, hybrid_3d, particles) await porting.
handoff(Claude): V2 timeline operational, ready for MainSequence integration
|
|
- Create postprocess_inline.wgsl with 7 inline effect functions
- Functions: vignette, flash, fade, theme, solarize, chroma_aberration, distort
- Add example combined_postprocess_v2.wgsl showing usage
- Register postprocess_inline snippet with ShaderComposer
- Add to main and test workspace assets
- All tests passing (36/36)
Strategy: Simple effects become inline functions instead of separate classes.
Complex effects (rotating_cube, hybrid_3d, particles) remain as TODO for v2 port.
handoff(Claude): Inline functions ready, 7 simple effects consolidated
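For illustration, a minimal sketch of what two such inline helpers could look like in postprocess_inline.wgsl — the function names come from the list above, but the signatures and parameter names are assumptions, not the repository's actual code:
```wgsl
// Hypothetical sketch of two postprocess_inline.wgsl helpers; real signatures may differ.

// Darken toward the frame edges.
fn vignette(color: vec3<f32>, uv: vec2<f32>, strength: f32) -> vec3<f32> {
    let d = distance(uv, vec2<f32>(0.5, 0.5));
    return color * (1.0 - strength * smoothstep(0.3, 0.75, d));
}

// Additive white flash, typically driven by beat_phase.
fn flash(color: vec3<f32>, amount: f32) -> vec3<f32> {
    return mix(color, vec3<f32>(1.0, 1.0, 1.0), clamp(amount, 0.0, 1.0));
}
```
A combined shader then chains such calls inside a single fragment entry point instead of instantiating one effect class per pass.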
|
|
- Create v2-compatible WGSL shaders with UniformsSequenceParams
- Add sequence_v2_uniforms snippet for ShaderComposer
- Port 3 effects: PassthroughEffectV2, GaussianBlurEffectV2, HeptagonEffectV2
- Enable and fix end-to-end test (test_sequence_v2_e2e)
- Fix shader binding order (sampler at 0, texture at 1)
- Fix WebGPU validation (maxAnisotropy=1, explicit depthSlice)
- Add v2 shaders to main and test workspace assets
- All tests passing (36/36)
handoff(Claude): Phase 3 complete, v2 effects functional, ready for phase 4
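As a reference for the binding-order fix, a minimal sketch of a v2 effect skeleton with the sampler at binding 0 and the input texture at binding 1; the UniformsSequenceParams fields and its group/binding are assumed for illustration (maxAnisotropy=1 and depthSlice live on the C++ side and are not shown):
```wgsl
// Sketch only: sampler at 0, texture at 1 per the fix above; uniform fields assumed.
struct UniformsSequenceParams {
    time: f32,
    beat_phase: f32,
    resolution: vec2<f32>,
}

@group(0) @binding(0) var samp: sampler;          // sampler first ...
@group(0) @binding(1) var src: texture_2d<f32>;   // ... then the input texture
@group(1) @binding(0) var<uniform> params: UniformsSequenceParams;

@fragment
fn fs_main(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    return textureSample(src, samp, uv);  // passthrough
}
```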
|
|
Renamed files and classes:
- cnn_effect.{h,cc} → cnn_v1_effect.{h,cc}
- CNNEffect → CNNv1Effect
- CNNEffectParams → CNNv1EffectParams
- CNNLayerParams → CNNv1LayerParams
- CNN_EFFECT.md → CNN_V1_EFFECT.md
Updated all references:
- C++ includes and class usage
- CMake source list
- Timeline (workspaces/main/timeline.seq)
- Test file (test_demo_effects.cc)
- Documentation (CLAUDE.md, PROJECT_CONTEXT.md, READMEs)
Tests: 34/34 passing (100%)
|
|
Consolidate CNN v1 (CNNEffect) into dedicated directory:
- C++ effect: src/effects → cnn_v1/src/
- Shaders: workspaces/main/shaders/cnn → cnn_v1/shaders/
- Training: training/train_cnn.py → cnn_v1/training/
- Docs: doc/CNN*.md → cnn_v1/docs/
Updated all references:
- CMake source list
- C++ includes (relative paths: ../../cnn_v1/src/)
- Asset paths (../../cnn_v1/shaders/)
- Documentation cross-references
CNN v1 remains active in timeline. For new work, use CNN v2 with
enhanced features (7D static, storage buffer, sigmoid activation).
Tests: 34/34 passing (100%)
|
|
Move all CNN v2 files to dedicated cnn_v2/ directory to prepare for CNN v3 development. Zero functional changes.
Structure:
- cnn_v2/src/ - C++ effect implementation
- cnn_v2/shaders/ - WGSL shaders (6 files)
- cnn_v2/weights/ - Binary weights (3 files)
- cnn_v2/training/ - Python training scripts (4 files)
- cnn_v2/scripts/ - Shell scripts (train_cnn_v2_full.sh)
- cnn_v2/tools/ - Validation tools (HTML)
- cnn_v2/docs/ - Documentation (4 markdown files)
Changes:
- Update CMake source list to cnn_v2/src/cnn_v2_effect.cc
- Update assets.txt with relative paths to cnn_v2/
- Update includes to ../../cnn_v2/src/cnn_v2_effect.h
- Add PROJECT_ROOT resolution to Python/shell scripts
- Update doc references in HOWTO.md, TODO.md
- Add cnn_v2/README.md
Verification: 34/34 tests passing, demo runs correctly.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
SDFTestEffect was failing with undefined dfWithID error. The raymarching.wgsl
include requires dfWithID even for single-pass effects. Added dummy implementation
that wraps df() for compatibility.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
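A minimal sketch of the shim, assuming df() returns a plain distance and dfWithID() is expected to return a distance paired with an object ID (the actual signatures in raymarching.wgsl may differ):
```wgsl
// Assumed signatures: df() returns the scene distance; dfWithID() pairs it
// with an object ID. The dummy simply forwards to df() with ID 0.
fn df(p: vec3<f32>) -> f32 {
    return length(p) - 1.0;  // example scene: unit sphere
}

fn dfWithID(p: vec3<f32>) -> vec2<f32> {
    return vec2<f32>(df(p), 0.0);  // dummy: wrap df(), report ID 0
}
```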
|
|
Add unified camera system for SDF raymarching effects:
- CameraParams struct (80 bytes): inv_view matrix + FOV/near/far/aspect
- SDFEffect base class: manages camera uniform, provides update_camera() helpers
- camera_common.wgsl: getCameraRay(), position/forward/up/right extractors
- SDFTestEffect: working example with orbiting camera + animated sphere
Refactor effect headers:
- Extract class definitions from demo_effects.h to individual .h files
- Update includes in .cc files to use specific headers
- Cleaner compilation dependencies, faster incremental builds
Documentation:
- Add SDF_EFFECT_GUIDE.md with complete workflow
- Update ARCHITECTURE.md, UNIFORM_BUFFER_GUIDELINES.md
- Update EFFECT_WORKFLOW.md, CONTRIBUTING.md
Tests: 34/34 passing, SDFTestEffect validated
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
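The 80 bytes break down as a 64-byte inv_view matrix plus four f32 parameters. A sketch of the uniform and a getCameraRay() helper, with the field order and ray construction assumed rather than copied from camera_common.wgsl:
```wgsl
// Sketch of the camera uniform (64-byte inv_view + 4 x f32 = 80 bytes) and a
// ray generator; the real camera_common.wgsl may order fields differently.
struct CameraParams {
    inv_view: mat4x4<f32>,  // camera-to-world transform
    fov_y: f32,             // vertical field of view, radians
    near: f32,
    far: f32,
    aspect: f32
}

@group(0) @binding(0) var<uniform> camera: CameraParams;

fn getCameraRay(uv: vec2<f32>) -> vec3<f32> {
    // uv in [0,1]; build a view-space direction and rotate it into world space.
    let ndc = uv * 2.0 - vec2<f32>(1.0, 1.0);
    let tan_half_fov = tan(camera.fov_y * 0.5);
    let dir_view = vec3<f32>(ndc.x * tan_half_fov * camera.aspect,
                             ndc.y * tan_half_fov,
                             -1.0);
    let dir_world = (camera.inv_view * vec4<f32>(dir_view, 0.0)).xyz;
    return normalize(dir_world);
}
```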
|
|
Merge sdf_primitives.wgsl into math/sdf_shapes.wgsl to eliminate
duplication and establish single source of truth for all SDF functions.
Changes:
- Delete common/shaders/sdf_primitives.wgsl (duplicate of math/sdf_shapes.wgsl)
- Add sdBox2D() and sdEllipse() to math/sdf_shapes.wgsl
- Update ellipse.wgsl (main/test) to use #include "math/sdf_shapes"
- Update scene1.wgsl to use math/sdf_shapes instead of sdf_primitives
- Rename asset SHADER_SDF_PRIMITIVES → SHADER_SDF_SHAPES
- Update shader registration and tests
Impact:
- ~60 lines eliminated from ellipse shaders
- Single source for 3D primitives (sphere, box, torus, plane) and 2D (box, ellipse)
- Consistent include path across codebase
All tests passing (34/34).
handoff(Claude): SDF shapes consolidated to math/sdf_shapes.wgsl
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
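For reference, sdBox2D() presumably follows the conventional 2D box SDF; a sketch (the file's actual implementation may differ):
```wgsl
// Conventional signed distance to a 2D axis-aligned box with half-extents b.
fn sdBox2D(p: vec2<f32>, b: vec2<f32>) -> f32 {
    let d = abs(p) - b;
    return length(max(d, vec2<f32>(0.0, 0.0))) + min(max(d.x, d.y), 0.0);
}
```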
|
|
Replace duplicate fullscreen triangle vertex shader code with
#include "render/fullscreen_vs" in 8 workspace shaders. Eliminates
~60 lines of duplication and establishes single source of truth.
Modified shaders:
- circle_mask_compute.wgsl (main/test)
- circle_mask_render.wgsl (main/test)
- ellipse.wgsl (main/test)
- gaussian_blur.wgsl (main/test)
Updated test_shader_assets.cc to validate include directive instead
of inline @vertex keyword for affected shaders.
All tests passing (34/34).
handoff(Claude): Shader modularization - fullscreen_vs consolidated
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
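The shared snippet is presumably the usual single-triangle fullscreen vertex shader. A sketch of what render/fullscreen_vs provides, with the varying layout assumed:
```wgsl
// Classic fullscreen triangle: three vertices cover the screen, UVs derived
// from the vertex index, no vertex buffer needed.
struct VsOut {
    @builtin(position) pos: vec4<f32>,
    @location(0) uv: vec2<f32>
}

@vertex
fn vs_main(@builtin(vertex_index) vi: u32) -> VsOut {
    let uv = vec2<f32>(f32((vi << 1u) & 2u), f32(vi & 2u));
    var out: VsOut;
    out.pos = vec4<f32>(uv * 2.0 - 1.0, 0.0, 1.0);
    out.uv = vec2<f32>(uv.x, 1.0 - uv.y);  // flip y so the uv origin is top-left
    return out;
}
```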
|
|
Extracted common WGSL functions into separate files in `common/shaders/` to improve reusability and maintainability.
- Created `common/shaders/render/fullscreen_vs.wgsl` for a reusable fullscreen vertex shader.
- Created `common/shaders/math/color.wgsl` for color conversion and tone mapping functions.
- Created `common/shaders/math/utils.wgsl` for general math utilities.
- Created `common/shaders/render/raymarching.wgsl` for SDF raymarching logic.
- Updated multiple shaders to use these new common snippets via `#include`.
- Fixed the shader asset validation test to correctly handle shaders that include the common vertex shader.
This refactoring makes the shader code more modular and easier to manage.
|
|
Replace textureSample() with textureSampleLevel() in the compute shader.
textureSample() relies on implicit derivatives, which are only available in
fragment shaders; compute shaders must specify the mip level explicitly.
Fixes: DemoEffectsTest CNNv2Effect initialization
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
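A minimal before/after sketch, assuming a compute pass that reads the input at normalized UVs (binding indices and names are illustrative):
```wgsl
@group(0) @binding(0) var src: texture_2d<f32>;
@group(0) @binding(1) var samp: sampler;

@compute @workgroup_size(8, 8)
fn cs_main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let dims = vec2<f32>(textureDimensions(src));
    let uv = (vec2<f32>(gid.xy) + 0.5) / dims;
    // Invalid in a compute stage: textureSample(src, samp, uv) needs derivatives.
    // Valid: pick the mip level explicitly.
    let c = textureSampleLevel(src, samp, uv, 0.0);
    // ... write c to an output storage texture (omitted)
}
```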
|
|
**CNN v2 Changes:**
- Replace point sampling with bilinear interpolation for mip-level features
- Add linear sampler (binding 6) to static features shader
- Update CNNv2Effect, cnn_test, and HTML tool
**HTML Tool UI:**
- Move controls to floating bottom bar in central view
- Consolidate video controls + Blend/Depth/Save PNG in single container
- Increase left panel width: 300px → 315px (+5%)
- Remove per-frame debug messages (visualization, rendering logs)
**Technical:**
- WGSL: textureSample() with linear_sampler vs textureLoad()
- C++: Create WGPUSampler with Linear filtering
- HTML: Change sampler from 'nearest' to 'linear'
handoff(Claude): CNN v2 now uses bilinear mip-level sampling across all tools
|
|
Fixes training collapse where p1/p2 channels saturate due to gradient
blocking at clamp boundaries. Sigmoid provides smooth [0,1] mapping
with continuous gradients.
Changes:
- Layer 0: clamp(x, 0, 1) → sigmoid(x)
- Final layer: clamp(x, 0, 1) → sigmoid(x)
- Middle layers: ReLU unchanged (already stable)
Updated files:
- training/train_cnn_v2.py: PyTorch model activations
- workspaces/main/shaders/cnn_v2/cnn_v2_compute.wgsl: WGSL shader
- tools/cnn_v2_test/index.html: HTML validation tool
- doc/CNN_V2.md: Documentation
Validation:
- Build clean (no shader errors)
- 34/36 tests pass (2 unrelated script tests fail)
- 10-epoch training: loss 0.153 → 0.088 (good convergence)
- cnn_test processes images successfully
Breaking change: Old checkpoints trained with clamp() are incompatible;
retraining from scratch is required.
handoff(Claude): CNN v2 sigmoid activation implemented and validated.
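A sketch of the shader-side change, assuming a per-layer activation selector; names are illustrative rather than the actual cnn_v2_compute.wgsl code:
```wgsl
// clamp(x, 0, 1) has zero gradient outside [0,1] during training; sigmoid keeps
// a smooth [0,1] mapping with non-zero gradients everywhere.
fn sigmoid4(x: vec4<f32>) -> vec4<f32> {
    return vec4<f32>(1.0) / (vec4<f32>(1.0) + exp(-x));
}

fn activate(x: vec4<f32>, layer: u32, last_layer: u32) -> vec4<f32> {
    if (layer == 0u || layer == last_layer) {
        return sigmoid4(x);            // was: clamp(x, vec4<f32>(0.0), vec4<f32>(1.0))
    }
    return max(x, vec4<f32>(0.0));     // middle layers: ReLU unchanged
}
```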
|
|
Updated gen_identity_weights.py --mix mode to use static features
p4-p7 (uv_x, uv_y, sin20_y, bias) at channels 8-11 instead of
p0-p3 (RGB+D) at channels 4-7.
Before: 0.5*prev[i] + 0.5*static_p{i} (channels 4-7)
After: 0.5*prev[i] + 0.5*static_p{4+i} (channels 8-11)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Training changes:
- Changed p3 default depth from 0.0 to 1.0 (far plane semantics)
- Extract depth from target alpha channel in both datasets
- Consistent alpha-as-depth across training/validation
Test tool enhancements (cnn_test):
- Added load_depth_from_alpha() for R32Float depth texture
- Fixed bind group layout for UnfilterableFloat sampling
- Added --save-intermediates with per-channel grayscale composites
- Each layer saved as 4x wide PNG (p0-p3 stacked horizontally)
- Global layers_composite.png for vertical layer stack overview
Investigation notes:
- Static features p4-p7 ARE computed and bound correctly
- Difference in sin20_y pattern visibility between the tools is still under investigation
- Binary weights timestamp (Feb 13 20:36) vs HTML tool (Feb 13 22:12)
- Next: Update HTML tool with canonical binary weights
handoff(Claude): HTML tool weights update pending - base64 encoded
canonical weights ready in /tmp/weights_b64.txt for line 392 replacement.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
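One note on the UnfilterableFloat fix: r32float textures cannot be used with a filtering sampler, so the depth read typically goes through textureLoad(). A sketch with assumed names:
```wgsl
@group(0) @binding(0) var depth_tex: texture_2d<f32>;  // R32Float: unfilterable-float

fn load_depth(pixel: vec2<u32>) -> f32 {
    // textureLoad avoids filtering entirely, which is what unfilterable-float requires.
    return textureLoad(depth_tex, vec2<i32>(pixel), 0).r;
}
```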
|
|
Fix two issues causing validation errors in test_demo:
1. Remove redundant pipeline creation without layout (static_pipeline_)
2. Change vec3<u32> to 3× u32 fields in StaticFeatureParams struct
WGSL aligns vec3<u32> to 16 bytes (std140-like uniform layout rules), which made
the struct 32 bytes while the C++ struct was 16 bytes. Explicit u32 fields ensure
a consistent layout on both sides.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
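A sketch of the layout mismatch; apart from mip_level, the field names are illustrative assumptions:
```wgsl
// Before (mismatch): a vec3<u32> member has 16-byte alignment in uniform layout,
// pushing the WGSL struct to 32 bytes while the C++ mirror packed 16 bytes.
// struct StaticFeatureParams {
//     mip_level: u32,
//     size: vec3<u32>,   // aligned to offset 16 -> struct size rounds to 32
// }

// After: explicit u32 fields give the same 16-byte layout on both sides.
struct StaticFeatureParams {
    mip_level: u32,
    size_x: u32,
    size_y: u32,
    size_z: u32
}
```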
|
|
Update positional encoding to use vertical coordinate at higher frequency.
Changes:
- train_cnn_v2.py: sin10_x → sin20_y (computed from uv_y)
- cnn_v2_static.wgsl: sin10_x → sin20_y (computed from uv_y)
- index.html: sin10_x → sin20_y (STATIC_SHADER)
- CNN_V2.md: Update feature descriptions and examples
- CNN_V2_BINARY_FORMAT.md: Update static features documentation
Feature vector: [p0, p1, p2, p3, uv_x, uv_y, sin20_y, bias]
Rationale: Higher frequency (20 vs 10) + vertical axis provides better
spatial discrimination for position encoding.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
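A sketch of the corresponding static-feature computation (surrounding code assumed):
```wgsl
// Feature vector per pixel: [p0, p1, p2, p3, uv_x, uv_y, sin20_y, bias]
fn positional_features(uv: vec2<f32>) -> vec4<f32> {
    // was: sin(10.0 * uv.x); now a higher-frequency encoding of the vertical axis
    let sin20_y = sin(20.0 * uv.y);
    return vec4<f32>(uv.x, uv.y, sin20_y, 1.0);  // bias = 1.0
}
```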
|
|
Document future enhancement for arbitrary feature vector layouts.
Proposed feature descriptor in binary format v3:
- Specify feature types, sources, and ordering
- Enable runtime experimentation without shader recompilation
- Examples: [R,G,B,dx,dy,uv_x,bias] or [mip1.r,mip2.g,laplacian,uv_x,sin20_x,bias]
Added TODOs in:
- CNN_V2_BINARY_FORMAT.md: Detailed proposal with struct layout
- CNN_V2.md: Future extensions section
- train_cnn_v2.py: compute_static_features() docstring
- cnn_v2_static.wgsl: Shader header comment
- cnn_v2_effect.cc: Version check comment
Current limitation: Hardcoded [p0,p1,p2,p3,uv_x,uv_y,sin10_x,bias] layout.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Binary format v2 includes mip_level in header (20 bytes, was 16).
Effect reads mip_level and passes to static features shader via uniform.
Shader samples from correct mip texture based on mip_level.
Changes:
- export_cnn_v2_weights.py: Header v2 with mip_level field
- cnn_v2_effect.h: Add StaticFeatureParams, mip_level member, params buffer
- cnn_v2_effect.cc: Read mip_level from weights, create/bind params buffer, update per-frame
- cnn_v2_static.wgsl: Accept params uniform, sample from selected mip level
Binary format v2:
- Header: 20 bytes (magic, version=2, num_layers, total_weights, mip_level)
- Backward compatible: v1 weights load with mip_level=0
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
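On the shader side, selecting the mip from the uniform could look like the following sketch; struct contents and bindings are assumptions:
```wgsl
struct StaticFeatureParams {
    mip_level: u32
}

@group(0) @binding(0) var src: texture_2d<f32>;
@group(0) @binding(1) var samp: sampler;
@group(0) @binding(2) var<uniform> params: StaticFeatureParams;

fn sample_selected_mip(uv: vec2<f32>) -> vec4<f32> {
    // mip_level comes from the weights-file header (v2); v1 files default to 0.
    return textureSampleLevel(src, samp, uv, f32(params.mip_level));
}
```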
|
|
Added SHADER_SNIPPET_A and SHADER_SNIPPET_B entries to test assets
config to resolve missing AssetId compile error in test_shader_composer.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixes test_assets.cc compilation by adding missing test asset IDs and
procedural generators. Test-specific code is protected with DEMO_STRIP_ALL
to exclude from release builds.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Converted the track.md drum notation to the .track format and integrated it as the main music.
165 BPM high-energy pattern with syncopated kicks, 16th-note hi-hats, and a break.
- Add workspaces/main/pop_punk_drums.track (3 patterns, 4-bar sequence)
- Add workspaces/main/track.md (notation reference)
- Update workspace.cfg to use pop_punk_drums.track
- Update BPM to 165
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Updated comments to clarify that per-layer kernel sizes are supported.
Code already handles this correctly via LayerInfo.kernel_size field.
Changes:
- cnn_v2_effect.h: Add comment about per-layer kernel sizes
- cnn_v2_compute.wgsl: Clarify LayerParams provides per-layer config
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
**Architecture changes:**
- Static features (8D): p0-p3 (parametric) + uv_x, uv_y, sin(10×uv_x), bias
- Input RGBD (4D): fed separately to all layers
- All layers: uniform 12D→4D (4 prev/input + 8 static → 4 output)
- Bias integrated in static features (bias=False in PyTorch)
**Weight calculations:**
- 3 layers × (12 × 3×3 × 4) = 1296 weights
- f16: 2.6 KB (vs old variable arch: ~6.4 KB)
**Updated files:**
*Training (Python):*
- train_cnn_v2.py: Uniform model, takes input_rgbd + static_features
- export_cnn_v2_weights.py: Binary export for storage buffers
- export_cnn_v2_shader.py: Per-layer shader export (debugging)
*Shaders (WGSL):*
- cnn_v2_static.wgsl: p0-p3 parametric features (mips/gradients)
- cnn_v2_compute.wgsl: 12D input, 4D output, vec4 packing
*Tools:*
- HTML tool (cnn_v2_test): Updated for 12D→4D, layer visualization
*Docs:*
- CNN_V2.md: Updated architecture, training, validation sections
- HOWTO.md: Reference HTML tool for validation
*Removed:*
- validate_cnn_v2.sh: Obsolete (used CNN v1 tool)
All code consistent with bias=False (bias in static features as 1.0).
handoff(Claude): CNN v2 architecture finalized and documented
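A condensed sketch of the per-tap accumulation this uniform 12D→4D architecture implies; the weight layout, bindings, and helper names are assumptions, not the actual cnn_v2_compute.wgsl:
```wgsl
// 3x3 taps x 12 input channels x 4 output channels = 432 weights per layer,
// x 3 layers = 1296 weights total. Plain f32 weights shown for clarity;
// the per-layer base offset into the buffer is omitted.
@group(0) @binding(4) var<storage, read> weights: array<f32>;

fn weight_vec4(i: u32) -> vec4<f32> {
    let k = i * 4u;
    return vec4<f32>(weights[k], weights[k + 1u], weights[k + 2u], weights[k + 3u]);
}

// tap_inputs holds the 12 channels gathered at one 3x3 tap:
// 4 from the previous layer (or the input RGBD) followed by the 8 static features.
fn accumulate_tap(acc: vec4<f32>, tap_inputs: array<f32, 12>, tap_index: u32) -> vec4<f32> {
    var result = acc;
    var inputs = tap_inputs;
    for (var c = 0u; c < 12u; c = c + 1u) {
        // one vec4 of output weights per input channel
        result = result + inputs[c] * weight_vec4(tap_index * 12u + c);
    }
    return result;
}
```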
|
|
Eliminates 36 duplicate shader files across workspaces.
Structure:
- common/shaders/{math,render,compute}/ - Shared utilities (20 files)
- workspaces/*/shaders/ - Workspace-specific only
Changes:
- Created common/shaders/ with math, render, compute subdirectories
- Moved 20 common shaders from workspaces to common/
- Removed duplicates from test workspace
- Updated assets.txt: ../../common/shaders/ references
- Enhanced asset_packer.cc: filesystem path normalization for ../ resolution
Implementation: Option 1 from SHADER_REUSE_INVESTIGATION.md
- Single source of truth for common code
- Workspace references via relative paths
- Path normalization in asset packer
handoff(Claude): Common shader directory implemented
|
|
Updated asset_dirs and shader_dirs to reflect reorganization:
- Removed legacy assets/ and ../common/ references
- Added new directories: music/, weights/, obj/
- Simplified shader_dirs to just shaders/
handoff(Claude): workspace.cfg files updated
|
|
Each workspace now has a weights/ directory to store binary weight files
from CNN training (e.g., cnn_v2_weights.bin).
Changes:
- Created workspaces/{main,test}/weights/
- Moved cnn_v2_weights.bin → workspaces/main/weights/
- Updated assets.txt reference
- Updated training scripts and export tool paths
handoff(Claude): Workspace weights/ directories added
|
|
Workspace structure now:
- workspaces/{main,test}/obj/ (3D models)
- workspaces/{main,test}/shaders/ (WGSL shaders)
- workspaces/{main,test}/music/ (audio samples)
Changes:
- Moved workspaces/*/assets/music/ → workspaces/*/music/
- Updated assets.txt paths (assets/music/ → music/)
- Moved test_demo.{seq,track} to tools/
- Moved assets/originals/ → tools/originals/
- Removed assets/common/ (legacy, duplicated in workspaces)
- Removed assets/final/ (legacy, superseded by workspaces)
- Updated hot-reload paths in main.cc
- Updated CMake references for test_demo and validation
- Updated gen_spectrograms.sh paths
handoff(Claude): Workspace reorganization complete
|
|
Moved main.cc, stub_main.cc, and test_demo.cc from src/ to src/app/
for better organization. Updated cmake/DemoExecutables.cmake paths.
handoff(Claude): App files reorganized into src/app/ directory
|
|
- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for dynamic blend in both CNN effects
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Features:
- CPU load bar: Color-coded (green→yellow→red) effect density visualization
- Overlaid under the waveform to save space; always visible
- Constant load (1.0) per active effect, 0.1 beat resolution
- Add Effect button: Create new effects in selected sequence
- Delete buttons in properties panel for quick access
- Timeline favicon (green bars SVG)
Fixes:
- Handle drag no longer jumps on mousedown (offset tracking)
- Sequence name input accepts numbers (explicit inputmode)
- Start Time label corrected (beats, not seconds)
Updated timeline.seq with beat-based timing adjustments.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
1. Loss printed at every epoch with \r (no scrolling)
2. Validation only on final epoch (not all checkpoints)
3. Process all input images (not just img_000.png)
Training output now shows live progress on a single updating line.
|
|
- Add QAT (quantization-aware training) notes
- Requires training with fake quantization
- Target: ~1.6 KB weights (vs 3.2 KB f16)
- Shader unpacking needs adaptation (4× u8 per u32)
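For the 4× u8 case, unpacking might look like this sketch, assuming a typical affine quantization with per-layer scale and zero point:
```wgsl
// Four 8-bit quantized weights per u32; dequantize with a per-layer scale and zero point.
fn unpack_u8_weight(word: u32, lane: u32, scale: f32, zero_point: f32) -> f32 {
    let q = (word >> (lane * 8u)) & 0xffu;
    return (f32(q) - zero_point) * scale;
}
```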
|
|
- Add binary weight format (header + layer info + packed f16)
- New export_cnn_v2_weights.py for binary weight export
- Single cnn_v2_compute.wgsl shader with storage buffer
- Load weights in CNNv2Effect::load_weights()
- Create layer compute pipeline with 5 bindings
- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels
Next: Complete bind group creation and multi-layer compute execution
|
|
Infrastructure for enhanced CNN post-processing with 7D feature input.
Phase 1: Shaders
- Static features compute (RGBD + UV + sin10_x + bias → 8×f16)
- Layer template (convolution skeleton, packing/unpacking)
- 3 mip level support for multi-scale features
Phase 2: C++ Effect
- CNNv2Effect class (multi-pass architecture)
- Texture management (static features, layer buffers)
- Build integration (CMakeLists, assets, tests)
Phase 3: Training Pipeline
- train_cnn_v2.py: PyTorch model with static feature concatenation
- export_cnn_v2_shader.py: f32→f16 quantization, WGSL generation
- Configurable architecture (kernels, channels)
Phase 4: Validation
- validate_cnn_v2.sh: End-to-end pipeline
- Checkpoint → shaders → build → test images
Tests: 36/36 passing
Next: Complete render pipeline implementation (bind groups, multi-pass)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixes:
- Sequence dragging with scroll offset
- Double-click collapse/expand (DOM recreation issue)
- Collapsed sequence dragging (removed stopPropagation)
Features:
- Quantize dropdown (Off, 1/32→1 beat) replaces snap-to-beat checkbox
- Works in both beat and second display modes
- Hotkeys: 0=Off, 1=1beat, 2=1/2, 3=1/4, 4=1/8, 5=1/16, 6=1/32
- Separate "Show Beats" toggle for display vs snap behavior
Technical:
- Track dragMoved state to avoid unnecessary DOM recreation
- Preserve dblclick detection by deferring renderTimeline()
- Quantization applies to sequences and effects uniformly
handoff(Claude): timeline editor quantize + drag fixes complete
|
|
Remove test snippets (a/b) that belong in test workspace only.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Fixed particle_spray_compute.wgsl (uniforms.beat → uniforms.beat_phase)
- Fixed ellipse.wgsl (uniforms.beat → uniforms.beat_phase)
- Applied to all workspace and asset directories
Resolves shader compilation error on demo64k startup.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
BREAKING CHANGE: Timeline format now uses beats as default unit
## Core Changes
**Uniform Structure (32 bytes maintained):**
- Added `beat_time` (absolute beats for musical animation)
- Renamed `beat` → `beat_phase` (fractional 0-1 for smooth oscillation)
- Kept `time` (physical seconds, tempo-independent)
**Seq Compiler:**
- Default: all numbers are beats (e.g., `5`, `16.5`)
- Explicit seconds: `2.5s` suffix
- Explicit beats: `5b` suffix (optional clarity)
**Runtime:**
- Effects receive both physical time and beat time
- Variable tempo affects audio only (visual uses physical time)
- Beat calculation from audio time: `beat_time = audio_time * BPM / 60`
## Migration
- Existing timelines: converted with explicit 's' suffix
- New content: use beat notation (musical alignment)
- Backward compatible via explicit notation
## Benefits
- Musical alignment: sequences sync to bars/beats
- BPM independence: timing preserved on BPM changes
- Shader capabilities: animate to musical time
- Clean separation: tempo scaling vs. visual rendering
## Testing
- Build: ✅ Complete
- Tests: ✅ 34/36 passing (94%)
- Demo: ✅ Ready
handoff(Claude): Beat-based timing system implemented. Variable tempo
only affects audio sample triggering. Visual effects use physical_time
(constant) and beat_time (musical). Shaders can now animate to beats.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
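A sketch of how an effect shader can use the new fields, assuming the uniform block exposes time, beat_time, and beat_phase as described above (the exact 32-byte layout is not shown in this log):
```wgsl
struct Uniforms {
    time: f32,        // physical seconds, tempo-independent
    beat_time: f32,   // absolute beats: audio_time * BPM / 60
    beat_phase: f32   // fractional part of the current beat, 0..1
    // ... remaining fields of the 32-byte block omitted
}

@group(1) @binding(0) var<uniform> u: Uniforms;

fn beat_pulse() -> f32 {
    // Smooth per-beat pulse for scale/brightness animation.
    return 0.5 + 0.5 * cos(6.2831853 * u.beat_phase);
}
```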
|
|
+misc
|