| Age | Commit message | Author |
|
Moved main.cc, stub_main.cc, and test_demo.cc from src/ to src/app/
for better organization. Updated cmake/DemoExecutables.cmake paths.
handoff(Claude): App files reorganized into src/app/ directory
|
|
- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for dynamic blend in both CNN effects
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
|
|
|
|
FATAL_CHECK triggers when its condition is TRUE (the error case).
The inverted equality checks (magic/version == correct_value)
therefore aborted when the weights were valid.
Changed to != checks so loading fails only on invalid data.
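The fixed check logic can be sketched as follows (a minimal Python model; the actual C++ FATAL_CHECK macro semantics are as described above, and the magic value and `fatal` helper here are hypothetical stand-ins):

```python
EXPECTED_MAGIC = 0x324E4E43  # hypothetical little-endian "CNN2" magic

def fatal(msg):
    raise RuntimeError(msg)

def check_weights(magic, version, expected_version=1):
    # Before the fix: the condition was `magic == EXPECTED_MAGIC`, so the
    # check fired on VALID data (TRUE condition is the error case).
    # After the fix: fail only when the values do NOT match.
    if magic != EXPECTED_MAGIC:
        fatal("bad magic in weight file")
    if version != expected_version:
        fatal("unsupported weight format version")
```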
|
|
Use container's getBoundingClientRect instead of timeline's. Timeline can
scroll off-screen with negative left values. Container stays visible and
provides reliable viewport coordinates. Fixes double-click seek and drag.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Read actual canvas offset from style.left instead of assuming scrollLeft.
Canvas is positioned with negative left offset, so we subtract it to get
correct beat position.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Use helper functions (beatsToTime, timeToBeats) consistently in click
handlers. Fixes red cursor jumping to wrong position during seek.
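The conversion helpers referenced above reduce to the standard BPM formulas; a Python sketch mirroring the editor's JS helpers (names taken from the commit, bodies assumed):

```python
def beats_to_time(beats, bpm):
    """Convert musical beats to wall-clock seconds."""
    return beats * 60.0 / bpm

def time_to_beats(seconds, bpm):
    """Convert wall-clock seconds to musical beats."""
    return seconds * bpm / 60.0
```

Using the same pair of functions everywhere keeps the cursor position and seek target in the same unit, which is what fixes the jump.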
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Factorize common patterns: POST_PROCESS_EFFECTS constant, helper functions
(beatsToTime, timeToBeats, beatRange, detectConflicts). Reduce verbosity
with modern JS features (nullish coalescing, optional chaining, Object.assign).
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Detects same-priority post-process effects (within and cross-sequence).
CPU load bar (10px, pastel colors) shows conflicts. Matches seq_compiler.cc
logic: warns when 2+ post-process effects share priority, regardless of
time overlap.
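The conflict rule can be sketched like this (a Python model of the described check; the effect record fields are hypothetical):

```python
from collections import defaultdict

def detect_conflicts(effects):
    """Group post-process effects by priority; 2+ effects sharing a
    priority is a conflict, regardless of time overlap."""
    by_priority = defaultdict(list)
    for e in effects:
        if e["post_process"]:
            by_priority[e["priority"]].append(e["name"])
    return {p: names for p, names in by_priority.items() if len(names) > 1}
```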
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Features:
- CPU load bar: Color-coded (green→yellow→red) effect density visualization
- Overlays under waveform to save space, always visible
- Constant load (1.0) per active effect, 0.1 beat resolution
- Add Effect button: Create new effects in selected sequence
- Delete buttons in properties panel for quick access
- Timeline favicon (green bars SVG)
Fixes:
- Handle drag no longer jumps on mousedown (offset tracking)
- Sequence name input accepts numbers (explicit inputmode)
- Start Time label corrected (beats, not seconds)
Updated timeline.seq with beat-based timing adjustments.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Updated:
- HOWTO.md: Complete pipeline, storage buffer, --validate mode
- TODO.md: Mark CNN v2 complete, add QAT TODO
- PROJECT_CONTEXT.md: Update Effects status
- CNN_V2.md: Mark complete, add storage buffer notes
- train_cnn_v2_full.sh: Add --help message
All documentation now reflects:
- Storage buffer architecture
- Binary weight format
- Live training progress
- Validation-only mode
- 8-bit quantization TODO
|
|
Usage:
./train_cnn_v2_full.sh --validate [checkpoint.pth]
Skips training and weight export, uses existing weights.
Validates all input images with latest (or specified) checkpoint.
Example:
./train_cnn_v2_full.sh --validate checkpoints/checkpoint_epoch_50.pth
|
|
1. Loss printed at every epoch with \r (no scrolling)
2. Validation only on final epoch (not all checkpoints)
3. Process all input images (not just img_000.png)
Training output now shows live progress with single line update.
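The single-line progress update works by writing a carriage return instead of a newline; a minimal sketch (function name and format are illustrative, not the script's actual code):

```python
import sys

def report_epoch(epoch, total, loss, stream=sys.stdout):
    # '\r' returns to column 0, so each epoch overwrites the previous line
    stream.write(f"\repoch {epoch}/{total}  loss={loss:.4f}")
    stream.flush()
    if epoch == total:
        stream.write("\n")  # keep the final line on screen
```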
|
|
- Add QAT (quantization-aware training) notes
- Requires training with fake quantization
- Target: ~1.6 KB weights (vs 3.2 KB f16)
- Shader unpacking needs adaptation (4× u8 per u32)
|
|
- Export weights from epoch 70 checkpoint (3.2 KB binary)
- Disable shader template generation (use manual cnn_v2_compute.wgsl)
- Build successful with real weights
- Ready for integration testing
Storage buffer architecture complete:
- Dynamic layer count support
- ~0.3ms overhead vs constants (negligible)
- Single shader, flexible configuration
- Binary format: header + layer info + f16 weights
|
|
- Create bind groups per layer with ping-pong buffers
- Update layer params uniform per dispatch
- Execute all layers in sequence with proper input/output swapping
- Ready for weight export and end-to-end testing
|
|
- Add binary weight format (header + layer info + packed f16)
- New export_cnn_v2_weights.py for binary weight export
- Single cnn_v2_compute.wgsl shader with storage buffer
- Load weights in CNNv2Effect::load_weights()
- Create layer compute pipeline with 5 bindings
- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels
Next: Complete bind group creation and multi-layer compute execution
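A sketch of the header + layer info + packed f16 layout described above (field names, magic, and ordering are assumptions, not the actual export_cnn_v2_weights.py format):

```python
import struct

def export_weights(layers):
    """layers: list of (kernel_size, in_ch, out_ch, weights) tuples.
    Layout: header (magic, version, layer count), then per-layer info,
    then all weights packed as IEEE half floats ('e' format)."""
    blob = struct.pack("<4sII", b"CNN2", 1, len(layers))
    for ksize, cin, cout, _ in layers:
        blob += struct.pack("<III", ksize, cin, cout)
    for _, _, _, weights in layers:
        blob += struct.pack(f"<{len(weights)}e", *weights)
    return blob
```

With the 3×3, 8→4 first layer above, this gives 12 + 12 + 2·(3·3·8·4) bytes for a one-layer file.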
|
|
Added note for future enhancement: mix salient + random samples.
Rationale:
- Salient point detection focuses on edges/corners
- Random samples improve generalization across entire image
- Prevents overfitting to only high-gradient regions
Proposed implementation:
- Default: 90% salient points, 10% random samples
- Configurable: --random-sample-percent parameter
- Example: 64 patches = 58 salient + 6 random
Location: train_cnn_v2.py
- TODO in _detect_salient_points() method
- TODO in argument parser
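The proposed 90/10 split reduces to a small helper (a sketch of the proposed behavior, not code from train_cnn_v2.py):

```python
def split_patch_counts(total, random_fraction=0.10):
    """Split a patch budget into salient and random samples,
    e.g. 64 patches -> 58 salient + 6 random at 10%."""
    n_random = round(total * random_fraction)
    n_salient = total - n_random
    return n_salient, n_random
```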
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Salient point detection on original images with patch extraction.
Changes:
- Added PatchDataset class (harris/fast/shi-tomasi/gradient detectors)
- Detects salient points on ORIGINAL images (no resize)
- Extracts 32×32 patches around salient points
- Default: 64 patches/image, harris detector
- Batch size: 16 (512 patches per batch)
Training modes:
1. Patch-based (default): --patch-size 32 --patches-per-image 64 --detector harris
2. Full-image (option): --full-image --image-size 256
Benefits:
- Focuses training on interesting regions
- Handles variable image sizes naturally
- Matches CNN v1 workflow
- Better convergence with limited data (8 images → 512 patches)
Script updated:
- train_cnn_v2_full.sh: Patch-based by default
- Configuration exposed for easy switching
Example:
./scripts/train_cnn_v2_full.sh # Patch-based
# Edit script: uncomment FULL_IMAGE for resize mode
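The patch-extraction step around a salient point can be sketched as follows (pure-Python stand-in for the dataset code; clamping behavior is assumed):

```python
def extract_patch(img, cy, cx, size=32):
    """img: 2-D list of pixel rows. Clamp the window so a full
    size x size patch is extracted even near image borders."""
    h, w = len(img), len(img[0])
    half = size // 2
    y0 = min(max(cy - half, 0), h - size)
    x0 = min(max(cx - half, 0), w - size)
    return [row[x0:x0 + size] for row in img[y0:y0 + size]]
```

Because patches have a fixed size regardless of the source image, variable image sizes batch naturally, as noted above.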
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Training script now resizes all images to fixed size before batching.
Issue: RuntimeError when batching variable-sized images
- Images had different dimensions (376x626 vs 344x361)
- PyTorch DataLoader requires uniform tensor sizes for batching
Solution:
- Add --image-size parameter (default: 256)
- Resize all images to target_size using LANCZOS interpolation
- Keeps training independent of source aspect ratios
Changes:
- train_cnn_v2.py: ImagePairDataset now resizes to fixed dimensions
- train_cnn_v2_full.sh: Added IMAGE_SIZE=256 configuration
Tested: 8 image pairs, variable sizes → uniform 256×256 batches
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Updated project status to reflect CNN v2 implementation completion.
Changes:
- TODO.md: Marked Task #85 as [READY FOR TRAINING]
- All 5 phases complete
- Infrastructure ready for model training and integration
- PROJECT_CONTEXT.md: Updated Effects section
- Added CNN v2 parametric static features reference
- Added CNN_V2.md to technical documentation list
Status summary:
✅ Phase 1: Static features shader (8×f16 packed, 3 mip levels)
✅ Phase 2: C++ effect class (CNNv2Effect)
✅ Phase 3: Training pipeline (train_cnn_v2.py, export)
✅ Phase 4: Validation tooling (validate_cnn_v2.sh)
✅ Phase 5: Render pipeline (compute passes, bind groups)
Next steps: Train model, generate layer shaders, demo integration
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Complete multi-pass compute execution for CNNv2Effect.
Implementation:
- Layer texture creation (ping-pong buffers for intermediate results)
- Static features compute pipeline with bind group layout
- Bind group creation with 5 bindings (input mips + depth + output)
- compute() override for multi-pass execution
- Static features pass with proper workgroup dispatch
Architecture:
- Static features: 8×f16 packed as 4×u32 (RGBD + UV + sin + bias)
- Layer buffers: 2×RGBA32Uint textures (8 channels f16 each)
- Input mips: 3 levels (0, 1, 2) for multi-scale features
- Workgroup size: 8×8 threads
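The 8×f16-as-4×u32 packing can be sketched with NumPy as a stand-in for the WGSL pack/unpack intrinsics (assumes NumPy; the WGSL side uses the same two-halves-per-word layout):

```python
import numpy as np

def pack_f16x8(features):
    """Pack 8 f16 feature values into 4 u32 words, low half first."""
    halves = np.asarray(features, dtype=np.float16).view(np.uint16).astype(np.uint32)
    return halves[0::2] | (halves[1::2] << 16)

def unpack_f16x8(words):
    """Recover the 8 f16 values from 4 u32 words."""
    halves = np.empty(8, dtype=np.uint16)
    halves[0::2] = words & 0xFFFF
    halves[1::2] = words >> 16
    return halves.view(np.float16)
```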
Status:
- Static features compute pass functional
- Layer pipeline infrastructure ready
- All 36/36 tests passing
Next: Layer shader integration, multi-layer execution
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Infrastructure for enhanced CNN post-processing with 7D feature input.
Phase 1: Shaders
- Static features compute (RGBD + UV + sin10_x + bias → 8×f16)
- Layer template (convolution skeleton, packing/unpacking)
- 3 mip level support for multi-scale features
Phase 2: C++ Effect
- CNNv2Effect class (multi-pass architecture)
- Texture management (static features, layer buffers)
- Build integration (CMakeLists, assets, tests)
Phase 3: Training Pipeline
- train_cnn_v2.py: PyTorch model with static feature concatenation
- export_cnn_v2_shader.py: f32→f16 quantization, WGSL generation
- Configurable architecture (kernels, channels)
Phase 4: Validation
- validate_cnn_v2.sh: End-to-end pipeline
- Checkpoint → shaders → build → test images
Tests: 36/36 passing
Next: Complete render pipeline implementation (bind groups, multi-pass)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Design document for CNN v2 with enhanced feature inputs:
- 7D static features: RGBD + UV + sin encoding + bias
- Per-layer configurable kernels (1×1, 3×3, 5×5)
- Float16 weight storage (~6.4 KB vs 3.2 KB)
- Multi-pass architecture with static feature compute
Implementation plan:
1. Static features compute shader (RGBD + UV + sin + bias)
2. C++ effect class (CNNv2Effect)
3. Training pipeline (train_cnn_v2.py, export_cnn_v2_shader.py)
4. Validation tooling (validate_cnn_v2.sh)
Files:
- doc/CNN_V2.md: Complete technical design (architecture, training, export)
- scripts/validate_cnn_v2.sh: End-to-end validation script
- TODO.md: Add CNN v2 as Priority 2 task
- doc/HOWTO.md: Add CNN v2 validation usage
Target: <10 KB for 64k demo constraint
handoff(Claude): CNN v2 design ready for implementation
|
|
Fixes:
- Sequence dragging with scroll offset
- Double-click collapse/expand (DOM recreation issue)
- Collapsed sequence dragging (removed stopPropagation)
Features:
- Quantize dropdown (Off, 1/32→1 beat) replaces snap-to-beat checkbox
- Works in both beat and second display modes
- Hotkeys: 0=Off, 1=1beat, 2=1/2, 3=1/4, 4=1/8, 5=1/16, 6=1/32
- Separate "Show Beats" toggle for display vs snap behavior
Technical:
- Track dragMoved state to avoid unnecessary DOM recreation
- Preserve dblclick detection by deferring renderTimeline()
- Quantization applies to sequences and effects uniformly
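The quantize step itself is simple rounding to the selected grid; a sketch (Python mirror of the editor logic, with `None` standing in for the Off setting):

```python
def quantize_beats(beats, step):
    """step: grid size in beats (1, 1/2, ... 1/32), or None for Off."""
    if not step:
        return beats
    return round(beats / step) * step
```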
handoff(Claude): timeline editor quantize + drag fixes complete
|
|
Remove test snippets (a/b) that belong in test workspace only.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
|
|
|
|
|
|
Reduced file size from 1899 to 823 lines (57% reduction) while improving
maintainability and user experience.
CSS improvements:
- Added CSS variables for colors, spacing, and border radius
- Consolidated duplicate button/input/label styles
- Added missing .zoom-controls class definition
- Reduced CSS from ~510 to ~100 lines
JavaScript refactoring:
- Centralized global state into single `state` object
- Created `dom` object to cache all element references
- Removed all inline event handlers (onclick, oninput)
- Replaced with proper addEventListener pattern
- Fixed missing playbackControls reference (bug fix)
- Reduced JS from ~1320 to ~660 lines
UX improvements:
- Playback indicators (red bars) now always visible, start at 0s
- During playback, highlight current sequence green (no expand/collapse reflow)
- Smooth scrolling follows playback indicator (10% interpolation at 40% viewport)
- Moved "Show Beats" checkbox inline with BPM controls
- Fixed playback controls layout (time left of button, proper gap/alignment)
- Error messages now logged to console as well as UI
No functional regressions - all features work identically.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Add red bar playback indicator on waveform (synced with timeline)
- Fix playback continuation after double-click seek (async/await)
- Improve stopPlayback() to preserve jump positions
- Add error handling to startPlayback()
- Update waveform click-to-seek to match double-click behavior
- Sync waveform indicator scroll with timeline
- Display time in both seconds and beats on seek
- Update documentation with new features
handoff(Claude): Timeline editor now has dual playback indicators and seamless seeking.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Update test status: 34/36 (94%) → 36/36 (100%)
- Add timeline editor to PROJECT_CONTEXT.md
- Fix broken BEAT_TIMING_SUMMARY.md references → doc/BEAT_TIMING.md
- Consolidate duplicate entries in README.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
|
|
- Add render/scene_query_mode to known placeholders in VerifyIncludes
- Remove warning for duplicate auxiliary texture registration (valid for multiple CNNEffect stacks)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Audio playback controls (play/pause, spacebar shortcut)
- Red playback indicator with auto-scroll (middle third viewport)
- Auto-expand active sequence during playback, collapse previous
- Click waveform to seek
- Sticky header: waveform + timeline ticks stay at top
- Sequences confined to separate scrollable container below header
- Document known bugs: zoom sync, positioning, reflow issues
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Timeline editor now stores all times internally as beats (not seconds),
aligning with the project's beat-based timing system. Added BPM slider
for tempo control. Serializes to beats (default format) and displays
beats primarily with seconds in tooltips.
Changes:
- parseTime() returns beats (converts 's' suffix to beats)
- serializeSeqFile() outputs beats (bare numbers)
- Timeline markers show beats (4-beat/bar increments)
- BPM slider (60-200) for tempo editing
- Snap-to-beat rounds to nearest beat
- Audio waveform aligned to beats
- showBeats enabled by default
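The parse/serialize pair described above can be sketched like this (Python mirror of the JS helpers; only the behavior stated in the commit is modeled):

```python
def parse_time(token, bpm):
    """'2.5s' -> beats via BPM conversion; bare numbers are already beats."""
    if token.endswith("s"):
        return float(token[:-1]) * bpm / 60.0
    return float(token)

def serialize_time(beats):
    """Default serialization: bare beat numbers."""
    return f"{beats:g}"
```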
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- wav_dump_backend: Fix data_size double-counting channels (line 126)
- test_wav_dump: Assert on data_size validation instead of warning
- main: Add SIGINT/SIGTERM handlers to finalize WAV on Ctrl+C
- Guard signal handler code with DEMO_HEADLESS ifdef
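The data_size fix reduces to counting each channel exactly once; a sketch of the corrected computation (the C++ backend's variable names are assumed):

```python
def wav_data_size(num_frames, num_channels, bits_per_sample):
    """WAV 'data' chunk size in bytes. The bug multiplied by the channel
    count twice (frames were already interleaved-sample counts)."""
    return num_frames * num_channels * (bits_per_sample // 8)
```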
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Updated all affected documentation files:
- UNIFORM_BUFFER_GUIDELINES.md: New CommonUniforms example
- ARCHITECTURE.md: Beat-based timing section
- EFFECT_WORKFLOW.md: Available uniforms reference
- CONTRIBUTING.md: Updated uniform buffer checklist
handoff(Claude): Beat-based timing system fully implemented and documented.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
The custom render() signature didn't match PostProcessEffect::render(),
so it was never called. The base class method was used instead, which
didn't update uniforms with the peak value.
Fixed by:
- Using correct override signature: render(pass, uniforms)
- Calling PostProcessEffect::render() to handle standard rendering
- Removed unused custom parameters (time, beat, peak_value, aspect_ratio)
- Added override keyword to update_bind_group()
Peak meter bar now displays correctly with audio_intensity.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Instead of duplicating the uniform structure definition, PeakMeterEffect
now uses ShaderComposer to include the common_uniforms snippet, ensuring
the struct definition always matches the canonical version.
Changes:
- Added shader_composer.h include
- Use ShaderComposer::Get().Compose() to prepend common_uniforms
- Changed 'Uniforms' → 'CommonUniforms' in shader
- Removed duplicate struct definition
This ensures consistency across all shaders and eliminates potential
drift between duplicate definitions.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
The embedded shader in PeakMeterEffect was using the old uniform
structure with _pad0/_pad1 instead of the new beat_time/beat_phase
fields, causing the peak meter bar to not display correctly.
Updated to match CommonPostProcessUniforms structure:
- Removed _pad0, _pad1
- Added beat_time, beat_phase
- Moved _pad to end
Peak meter visualization now works correctly in test_demo.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Added comprehensive doc/BEAT_TIMING.md user guide
- Updated BEAT_TIMING_SUMMARY.md with verification results
- Updated PROJECT_CONTEXT.md to highlight timing system
- Updated README.md with doc links
- Included architecture diagrams and examples
- Added troubleshooting section
Complete reference for beat-based timeline authoring and shader
animation with musical timing.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Fixed particle_spray_compute.wgsl (uniforms.beat → uniforms.beat_phase)
- Fixed ellipse.wgsl (uniforms.beat → uniforms.beat_phase)
- Applied to all workspace and asset directories
Resolves shader compilation error on demo64k startup.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
BREAKING CHANGE: Timeline format now uses beats as default unit
## Core Changes
**Uniform Structure (32 bytes maintained):**
- Added `beat_time` (absolute beats for musical animation)
- Added `beat_phase` (fractional 0-1 for smooth oscillation)
- Renamed `beat` → `beat_phase`
- Kept `time` (physical seconds, tempo-independent)
**Seq Compiler:**
- Default: all numbers are beats (e.g., `5`, `16.5`)
- Explicit seconds: `2.5s` suffix
- Explicit beats: `5b` suffix (optional clarity)
**Runtime:**
- Effects receive both physical time and beat time
- Variable tempo affects audio only (visual uses physical time)
- Beat calculation from audio time: `beat_time = audio_time * BPM / 60`
## Migration
- Existing timelines: converted with explicit 's' suffix
- New content: use beat notation (musical alignment)
- Backward compatible via explicit notation
## Benefits
- Musical alignment: sequences sync to bars/beats
- BPM independence: timing preserved on BPM changes
- Shader capabilities: animate to musical time
- Clean separation: tempo scaling vs. visual rendering
## Testing
- Build: ✅ Complete
- Tests: ✅ 34/36 passing (94%)
- Demo: ✅ Ready
handoff(Claude): Beat-based timing system implemented. Variable tempo
only affects audio sample triggering. Visual effects use physical_time
(constant) and beat_time (musical). Shaders can now animate to beats.
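The two beat uniforms derive from audio time exactly as stated above; a minimal sketch:

```python
def beat_uniforms(audio_time, bpm):
    """Derive the beat uniforms shaders receive each frame."""
    beat_time = audio_time * bpm / 60.0   # absolute beats, for musical animation
    beat_phase = beat_time % 1.0          # fractional 0-1, for smooth oscillation
    return beat_time, beat_phase
```

A shader can then, for example, pulse with `sin(2*pi*beat_phase)` while physical `time` stays tempo-independent.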
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
+misc
|
|
- Sequences start collapsed by default for better overview
- Move controls to header for vertical space savings
- Remove "Pixels per second" label (redundant with zoom %)
- Move properties panel to bottom left
- Update collapse arrows for vertical layout (▼/▲)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Sticky time markers stay visible when scrolling
- Faint vertical grid lines aligned with ticks for better alignment
- Collapsible sequences via double-click (35px collapsed state)
- Updated all references from demo.seq to timeline.seq
- Consolidated and tightened documentation
- Fixed _collapsed initialization bug
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Comprehensive analysis of single-pass CNN shader architecture:
- Full flatten (3 layers): 544 bytes/thread register pressure - NOT recommended
- Partial flatten (layers 1+2): 288 bytes/thread - marginal benefit
- Current multi-pass: Optimal for GPU occupancy and maintainability
Recommendation: Keep current 3-pass architecture.
Alternative size optimizations: weight quantization, kernel reduction.
handoff(Claude): CNN flatten analysis documented
|