-rw-r--r--  BEAT_TIMING_SUMMARY.md  251
-rw-r--r--  PROJECT_CONTEXT.md  8
-rw-r--r--  README.md  3
-rw-r--r--  TODO.md  21
-rw-r--r--  cur/layer_0.png  bin 6786 -> 0 bytes
-rw-r--r--  cur/layer_1.png  bin 6786 -> 0 bytes
-rw-r--r--  doc/CNN_V2.md  671
-rw-r--r--  doc/HOWTO.md  23
-rw-r--r--  go.sh  10
-rw-r--r--  output/layer_0.png  bin 199123 -> 0 bytes
-rw-r--r--  output/layer_1.png  bin 29714 -> 0 bytes
-rw-r--r--  output/ref/layer_0.png  bin 178798 -> 0 bytes
-rw-r--r--  output/ref/layer_1.png  bin 92025 -> 0 bytes
-rw-r--r--  output/toto.png  bin 10071 -> 0 bytes
-rw-r--r--  output/toto0.png  bin 22960 -> 0 bytes
-rwxr-xr-x  scripts/validate_cnn_v2.sh  198
-rw-r--r--  test/toto.png  bin 27738 -> 0 bytes
-rw-r--r--  test_passthrough.wgsl  10
-rw-r--r--  tmp/layer_0.png  bin 239570 -> 0 bytes
-rw-r--r--  tmp/layer_1.png  bin 71523 -> 0 bytes
-rw-r--r--  tools/timeline_editor/README.md  46
-rw-r--r--  tools/timeline_editor/ROADMAP.md  32
-rw-r--r--  tools/timeline_editor/index.html  1974
-rw-r--r--  toto.png  bin 24933 -> 0 bytes
-rwxr-xr-x  training/debug.sh (renamed from training/debug/debug.sh)  0
-rw-r--r--  training/debug/cur/layer_0.png  bin 406194 -> 0 bytes
-rw-r--r--  training/debug/cur/layer_1.png  bin 238358 -> 0 bytes
-rw-r--r--  training/debug/cur/toto.png  bin 90164 -> 0 bytes
-rw-r--r--  training/debug/ref/layer_0.png  bin 356038 -> 0 bytes
-rw-r--r--  training/debug/ref/layer_1.png  bin 222247 -> 0 bytes
-rw-r--r--  training/debug/ref/toto.png  bin 107009 -> 0 bytes
-rw-r--r--  training/debug/training/checkpoints/checkpoint_epoch_10.pth  bin 6395 -> 0 bytes
-rw-r--r--  training/debug/training/checkpoints/checkpoint_epoch_100.pth  bin 6417 -> 0 bytes
-rw-r--r--  training/debug/training/checkpoints/checkpoint_epoch_50.pth  bin 6395 -> 0 bytes
-rw-r--r--  training/output/ground_truth.png  bin 8547 -> 0 bytes
-rw-r--r--  training/output/img_000.png  bin 16088 -> 0 bytes
-rw-r--r--  training/output/img_001.png  bin 7727 -> 0 bytes
-rw-r--r--  training/output/img_002.png  bin 7366 -> 0 bytes
-rw-r--r--  training/output/img_003.png  bin 7395 -> 0 bytes
-rw-r--r--  training/output/img_004.png  bin 13073 -> 0 bytes
-rw-r--r--  training/output/img_005.png  bin 5119 -> 0 bytes
-rw-r--r--  training/output/img_006.png  bin 28013 -> 0 bytes
-rw-r--r--  training/output/img_007.png  bin 21369 -> 0 bytes
-rw-r--r--  training/output/patch_final.png  bin 105 -> 0 bytes
-rw-r--r--  training/output/patch_gt.png  bin 138 -> 0 bytes
-rw-r--r--  training/output/patch_tool.png  bin 105 -> 0 bytes
-rw-r--r--  training/output/patch_tool_fixed.png  bin 105 -> 0 bytes
-rw-r--r--  training/output/test_debug.png  bin 105 -> 0 bytes
-rw-r--r--  training/output/test_sync.png  bin 105 -> 0 bytes
-rw-r--r--  training/output/tool_output.png  bin 8030 -> 0 bytes
-rw-r--r--  training/toto.png  bin 103619 -> 0 bytes
-rw-r--r--  workspaces/main/shaders/test_snippet_a.wgsl  4
-rw-r--r--  workspaces/main/shaders/test_snippet_b.wgsl  4
-rw-r--r--  workspaces/main/timeline.seq.backup  105
-rw-r--r--  workspaces/test/timeline.seq.backup  8
55 files changed, 1504 insertions(+), 1864 deletions(-)
diff --git a/BEAT_TIMING_SUMMARY.md b/BEAT_TIMING_SUMMARY.md
deleted file mode 100644
index e593380..0000000
--- a/BEAT_TIMING_SUMMARY.md
+++ /dev/null
@@ -1,251 +0,0 @@
-# Beat-Based Timing System
-
-## Summary
-
-**Timeline sequences now use musical beats as the primary time unit**, ensuring visual effects stay synchronized to music structure regardless of BPM changes. Variable tempo only affects audio sample triggering—visual effects run at constant physical time with optional beat-synchronized animation.
-
-**Key Change:** `CommonPostProcessUniforms` now provides both `time` (physical seconds) and `beat_time` (absolute beats) + `beat_phase` (fractional 0-1) for flexible animation.
-
----
-
-## Changes Made
-
-### 1. **Documentation Updated**
-- `doc/SEQUENCE.md`: Beat-based format as primary, updated runtime parameters
-- `tools/timeline_editor/README.md`: Beat notation as default
-
-### 2. **Uniform Structure Enhanced**
-```cpp
-struct CommonPostProcessUniforms {
- vec2 resolution; // Screen dimensions
- float aspect_ratio; // Width/height ratio
- float time; // Physical seconds (unaffected by tempo)
- float beat_time; // NEW: Absolute beats (musical time)
- float beat_phase; // NEW: Fractional beat 0-1 (was "beat")
- float audio_intensity; // Audio peak
- float _pad; // Alignment
-}; // 32 bytes (unchanged size)
-```
-
-### 3. **Shader Updates**
-- All `common_uniforms.wgsl` files updated with new field names
-- Effects can now use:
- - `time` for physics-based animation (constant speed)
- - `beat_time` for musical animation (bars/beats)
- - `beat_phase` for smooth per-beat oscillation
-
-### 4. **Seq Compiler**
-- `tools/seq_compiler.cc`: Beat notation as default parser behavior
-- Format: `5` = beats, `2.5s` = explicit seconds
-- BPM-based conversion at compile time (beats → seconds)
-
-### 5. **Timeline Files Converted**
-- `workspaces/main/timeline.seq`: Added 's' suffix to preserve timing
-- `workspaces/test/timeline.seq`: Added 's' suffix to preserve timing
-- Existing demos run unchanged with explicit seconds notation
-
-### 6. **Runtime Updates**
-- `main.cc`: Calculates `beat_time` and `beat_phase` from audio time
-- `gpu.cc`: Passes both physical time and beat time to effects
-- `effect.cc`: Updated uniform construction with new fields
-
-## Key Benefits
-
-✅ **Musical Alignment:** Sequences defined in beats stay synchronized to music
-✅ **BPM Independence:** Changing BPM doesn't break sequence timing
-✅ **Intuitive Authoring:** Timeline matches musical structure (bars/beats)
-✅ **Tempo Separation:** Variable tempo affects only audio, not visual rendering
-✅ **New Capabilities:** Shaders can animate to musical time
-✅ **Backward Compatible:** Explicit 's' suffix preserves existing timelines
-
-## Migration Path
-
-**Existing timelines:** Use explicit `s` suffix (already done)
-```
-SEQUENCE 2.50s 0
- EFFECT + Flash 0.00s 1.00s
-```
-
-**New content:** Use beat notation (natural default)
-```
-# BPM 120
-SEQUENCE 0 0 "Intro" # Beat 0 = bar 1
- EFFECT + Flash 0 2 # Beats 0-2 (half bar)
- EFFECT + Fade 4 8 # Beats 4-8 (full bar)
-```
-
-## Verification
-
-**Build:** ✅ Complete (100%)
-```bash
-cmake --build build -j4
-```
-
-**Tests:** ✅ 34/36 passing (94%)
-```bash
-cd build && ctest
-```
-
-**Demo Run:** ✅ Verified
-```
-[GraphicsT=0.32, AudioT=0.13, Beat=0, Phase=0.26, Peak=1.00]
-[GraphicsT=0.84, AudioT=0.64, Beat=1, Phase=0.28, Peak=0.14]
-[GraphicsT=1.38, AudioT=1.15, Beat=2, Phase=0.30, Peak=0.92]
-```
-- Beat counting: ✅ Correct (0→1→2→3...)
-- Phase tracking: ✅ Correct (fractional 0.0-1.0)
-- Effect timing: ✅ Sequences start/end at correct times
-- Shader compilation: ✅ No errors
-
-**Commits:**
-- `89c4687` - feat: implement beat-based timing system
-- `641b5b6` - fix: update shader files to use beat_phase
-
----
-
-## Usage Examples
-
-### Timeline Authoring (Beat-Based)
-```seq
-# BPM 120
-SEQUENCE 0 0 "Intro (Bar 1)"
- EFFECT + Flash 0 2 # Beats 0-2 (half bar)
- EFFECT + Fade 2 4 # Beats 2-4 (second half)
-
-SEQUENCE 8 1 "Drop (Bar 3)"
- EFFECT + Heptagon 0 16 # Full 4 bars (16 beats)
- EFFECT + Particles 4 12 # Beats 4-12 (2 bars)
-```
-
-### Shader Animation (Musical Time)
-```wgsl
-// Pulse every 4 beats (one bar)
-let bar_pulse = sin(uniforms.beat_time * TAU / 4.0);
-
-// Smooth per-beat oscillation
-let beat_wave = sin(uniforms.beat_phase * TAU);
-
-// Physics-based (constant speed)
-let rotation = uniforms.time * TAU;
-```
-
-### Legacy Timelines (Explicit Seconds)
-```seq
-SEQUENCE 2.50s 0
- EFFECT + Flash 0.00s 1.00s # Preserved timing
-```
-
----
-
-## Architecture
-
-**Timing Separation:**
-```
-┌─────────────────┐
-│ Platform Clock │ (physical seconds)
-└────────┬────────┘
- │
- ┌────┴─────┬──────────────┐
- ▼ ▼ ▼
-Physical Audio Time Music Time
- Time (playback) (tempo-scaled)
- │ │ │
- │ └──────┬───────┘
- │ ▼
- │ Beat Calculation
- │ (BPM conversion)
- │ │
- └────────┬────────┘
- ▼
- Visual Effects Rendering
- (time + beat_time + beat_phase)
-```
-
-**Key Insight:** Variable tempo changes `music_time` for audio triggering, but visual effects receive constant `time` (physical) and derived `beat_time` (from audio playback, not music_time).
-
----
-
-## Technical Details
-
-### Uniform Size Maintained
-```cpp
-// Before (32 bytes):
-struct { vec2 res; float _pad[2]; float aspect, time, beat, intensity; }
-
-// After (32 bytes):
-struct { vec2 res; float aspect, time, beat_time, beat_phase, intensity, _pad; }
-```
-
-### Beat Calculation
-```cpp
-// main.cc
-const float absolute_beat_time = current_audio_time * g_tracker_score.bpm / 60.0f;
-const float beat_phase = fmodf(absolute_beat_time, 1.0f);
-```
-
-### Seq Compiler Logic
-```cpp
-// Default: beats → seconds
-float beat = std::stof(value);
-float time = beat * 60.0f / bpm;
-
-// Explicit seconds: pass through
-if (value.back() == 's') return seconds;
-```
-
----
-
-## Migration Guide
-
-**For New Content:** Use beat notation (recommended)
-```seq
-# BPM 140
-SEQUENCE 0 0 "Intro"
- EFFECT + Flash 0 4 # 4 beats = 1.71s @ 140 BPM
-```
-
-**For Existing Content:** Already migrated with 's' suffix
-```seq
-SEQUENCE 2.50s 0 # Preserved exact timing
- EFFECT + Flash 0.00s 1.00s
-```
-
-**For Shader Effects:**
-- Use `uniforms.beat_phase` (not `uniforms.beat`)
-- Use `uniforms.beat_time` for bar-based animation
-- Use `uniforms.time` for constant-speed animation
-
----
-
-## Files Modified
-
-**Core System:**
-- `src/gpu/effects/post_process_helper.h` - Uniform structure
-- `src/gpu/effect.{h,cc}` - Effect rendering signatures
-- `src/gpu/gpu.{h,cc}` - GPU draw interface
-- `src/main.cc`, `src/test_demo.cc` - Beat calculation
-
-**Shaders:**
-- `workspaces/{main,test}/shaders/common_uniforms.wgsl`
-- `assets/{common,final}/shaders/common_uniforms.wgsl`
-- All effect shaders using beat: `particle_spray_compute.wgsl`, `ellipse.wgsl`
-
-**Timeline Compiler:**
-- `tools/seq_compiler.cc` - Beat-as-default parser
-
-**Timelines:**
-- `workspaces/main/timeline.seq` - Explicit 's' suffix
-- `workspaces/test/timeline.seq` - Explicit 's' suffix
-
-**Documentation:**
-- `doc/SEQUENCE.md` - Beat notation format
-- `tools/timeline_editor/README.md` - Editor usage
-
----
-
-## Future Enhancements
-
-1. **Beat-Synced Effects:** Create effects that pulse/animate to bars
-2. **Timeline Conversion:** Tool to convert explicit seconds → beats
-3. **Editor Support:** Timeline editor beat grid visualization
-4. **Shader Helpers:** WGSL functions for common beat patterns
diff --git a/PROJECT_CONTEXT.md b/PROJECT_CONTEXT.md
index e57763e..fb6f931 100644
--- a/PROJECT_CONTEXT.md
+++ b/PROJECT_CONTEXT.md
@@ -31,15 +31,15 @@
## Current Status
-- **Timing System:** **Beat-based timelines** for musical synchronization. Sequences defined in beats, converted to seconds at runtime. Effects receive both physical time (constant) and beat time (musical). Variable tempo affects audio only. See `BEAT_TIMING_SUMMARY.md`.
+- **Timing System:** **Beat-based timelines** for musical synchronization. Sequences defined in beats, converted to seconds at runtime. Effects receive both physical time (constant) and beat time (musical). Variable tempo affects audio only. See `doc/BEAT_TIMING.md`.
- **Workspace system:** Multi-workspace support. Easy switching with `-DDEMO_WORKSPACE=<name>`. Shared common assets.
- **Audio:** Sample-accurate sync. Zero heap allocations per frame. Variable tempo. Comprehensive tests.
- **Shaders:** Parameterized effects (UniformHelper, .seq syntax). Beat-synchronized animation support (`beat_time`, `beat_phase`). Modular WGSL composition.
- **3D:** Hybrid SDF/rasterization with BVH. Binary scene loader. Blender pipeline.
- **Effects:** CNN post-processing foundation (3-layer architecture, modular snippets). CNNEffect validated in demo.
-- **Tools:** CNN test tool (readback works, output incorrect - under investigation). Texture readback utility functional.
+- **Tools:** CNN test tool (readback works, output incorrect - under investigation). Texture readback utility functional. Timeline editor (web-based, beat-aligned, audio playback).
- **Build:** Asset dependency tracking. Size measurement. Hot-reload (debug-only).
-- **Testing:** **34/36 passing (94%)**
+- **Testing:** **36/36 passing (100%)**
---
@@ -59,7 +59,7 @@ See `TODO.md` for current priorities and active tasks.
**Technical Reference:**
- Core: `ASSET_SYSTEM.md`, `SEQUENCE.md`, `TRACKER.md`, `3D.md`, `CNN_EFFECT.md`
- Formats: `SCENE_FORMAT.md`, `MASKING_SYSTEM.md`
-- Tools: `BUILD.md`, `WORKSPACE_SYSTEM.md`, `SIZE_MEASUREMENT.md`, `CNN_TEST_TOOL.md`
+- Tools: `BUILD.md`, `WORKSPACE_SYSTEM.md`, `SIZE_MEASUREMENT.md`, `CNN_TEST_TOOL.md`, `tools/timeline_editor/README.md`
**History:**
- `doc/COMPLETED.md` - Completed tasks archive
diff --git a/README.md b/README.md
index 2f74e54..a99e0c0 100644
--- a/README.md
+++ b/README.md
@@ -19,8 +19,7 @@ cmake --build build -j4
- **doc/EFFECT_WORKFLOW.md** - Step-by-step guide for adding visual effects
**Key Features:**
-- **BEAT_TIMING_SUMMARY.md** - Beat-based timing system (NEW)
-- **doc/BEAT_TIMING.md** - Timeline authoring guide
+- **doc/BEAT_TIMING.md** - Beat-based timing system and timeline authoring
- **doc/CONTRIBUTING.md** - Development guidelines and protocols
See `doc/` for detailed technical documentation.
diff --git a/TODO.md b/TODO.md
index d7d24bc..b0cf2bb 100644
--- a/TODO.md
+++ b/TODO.md
@@ -24,6 +24,27 @@ Self-contained workspaces for parallel demo development.
---
+## Priority 2: CNN v2 - Parametric Static Features (Task #85) [PLANNING]
+
+Enhanced CNN post-processing with multi-dimensional feature inputs.
+
+**Design:** `doc/CNN_V2.md`
+
+**Implementation phases:**
+1. Static features compute shader (RGBD + UV + sin encoding + bias)
+2. C++ effect class (multi-pass layer execution)
+3. Training pipeline (PyTorch f32 → f16 export)
+4. Validation tooling (end-to-end checkpoint testing)
+
+**Key improvements over v1:**
+- 7D static feature input (vs 4D RGB)
+- Per-layer configurable kernels (1×1, 3×3, 5×5)
+- Float16 weight storage (~6.8 KB f16 vs 3.2 KB f32 in v1)
+
+**Target:** <10 KB for 64k demo constraint
+
+---
+
## Priority 3: 3D System Enhancements (Task #18)
Pipeline for importing complex 3D scenes to replace hardcoded geometry.
diff --git a/cur/layer_0.png b/cur/layer_0.png
deleted file mode 100644
index 46a0065..0000000
--- a/cur/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/cur/layer_1.png b/cur/layer_1.png
deleted file mode 100644
index 46a0065..0000000
--- a/cur/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/doc/CNN_V2.md b/doc/CNN_V2.md
new file mode 100644
index 0000000..b3b6587
--- /dev/null
+++ b/doc/CNN_V2.md
@@ -0,0 +1,671 @@
+# CNN v2: Parametric Static Features
+
+**Technical Design Document**
+
+---
+
+## Overview
+
+CNN v2 extends the original CNN post-processing effect with parametric static features, enabling richer spatial and frequency-domain inputs for improved visual quality.
+
+**Key improvements over v1:**
+- 7D static feature input (vs 4D RGB)
+- Multi-frequency position encoding (NeRF-style)
+- Per-layer configurable kernel sizes (1×1, 3×3, 5×5)
+- Variable channel counts per layer
+- Float16 weight storage (GPU-optimized)
+- Bias integrated as static feature dimension
+
+**Status:** Design complete, ready for implementation
+
+---
+
+## Architecture
+
+### Pipeline Overview
+
+```
+Input RGBD → Static Features Compute → CNN Layers → Output RGBA
+ └─ computed once/frame ─┘ └─ multi-pass ─┘
+```
+
+**Static Features Texture:**
+- Name: `static_features`
+- Format: `texture_storage_2d<rgba32uint, write>` (4×u32)
+- Data: 8 float16 values packed via `pack2x16float()`
+- Computed once per frame, read by all CNN layers
+- Lifetime: Entire frame (all CNN layer passes)
+
+**CNN Layers:**
+- Input Layer: 7D static features → C₀ channels
+- Inner Layers: (7D + Cᵢ₋₁) → Cᵢ channels
+- Output Layer: (7D + Cₙ) → 4D RGBA
+- Storage: `texture_storage_2d<rgba32uint>` (8×f16 per texel recommended)
+
+---
+
+## Static Features (7D + 1 bias)
+
+### Feature Layout
+
+**8 float16 values per pixel:**
+
+```wgsl
+// Slot 0-3: RGBD (core pixel data)
+let r = rgba.r; // Red channel
+let g = rgba.g; // Green channel
+let b = rgba.b; // Blue channel
+let d = depth; // Depth value
+
+// Slot 4-5: UV coordinates (normalized screen space)
+let uv_x = coord.x / resolution.x; // Horizontal position [0,1]
+let uv_y = coord.y / resolution.y; // Vertical position [0,1]
+
+// Slot 6: Multi-frequency position encoding
+let sin10_x = sin(10.0 * uv_x); // Periodic feature (frequency=10)
+
+// Slot 7: Bias dimension (always 1.0)
+let bias = 1.0; // Learned bias per output channel
+
+// Packed storage: [R, G, B, D, uv.x, uv.y, sin(10*uv.x), 1.0]
+```
+
+### Feature Rationale
+
+| Feature | Dimension | Purpose | Priority |
+|---------|-----------|---------|----------|
+| RGBD | 4D | Core pixel information | Essential |
+| UV coords | 2D | Spatial position awareness | Essential |
+| sin(10\*uv.x) | 1D | Periodic position encoding | Medium |
+| Bias | 1D | Learned bias (standard NN) | Essential |
+
+**Why bias as static feature:**
+- Simpler shader code (single weight array)
+- Standard NN formulation: y = Wx (x includes bias term)
+- Saves 56-112 bytes (no separate bias buffer)
+- 7 features sufficient for initial implementation
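+
+Concretely, appending a constant 1.0 to the input folds the per-channel bias into the weight matrix (the standard augmented-input trick; notation here is illustrative, not from the shader code):
+
+```
+y = W'x'   where x' = [x, 1],  W' = [W | b]   ⇒   y = Wx + b
+```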
+
+### Future Feature Extensions
+
+**Option: Replace sin(10\*uv.x) with:**
+- `sin(20*uv.x)` - Higher frequency encoding
+- `gray_mip1` - Multi-scale luminance
+- `dx`, `dy` - Sobel gradients
+- `variance` - Local texture measure
+- `laplacian` - Edge detection
+
+**Option: uint8 packing (16+ features):**
+```wgsl
+// texture_storage_2d<rgba8unorm> stores 16 uint8 values
+// Trade precision for feature count
+// [R, G, B, D, uv.x, uv.y, sin10.x, sin10.y,
+// sin20.x, sin20.y, dx, dy, gray_mip1, gray_mip2, var, bias]
+```
+Requires quantization-aware training.
+
+---
+
+## Layer Structure
+
+### Example 3-Layer Network
+
+```
+Input: 7D static → 16 channels (1×1 kernel, pointwise)
+Layer1: (7+16)D → 8 channels (3×3 kernel, spatial)
+Layer2: (7+8)D → 4 channels (5×5 kernel, large receptive field)
+```
+
+### Weight Calculations
+
+**Per-layer weights:**
+```
+Input:  8 × 1 × 1 × 16     =  128 weights   (8 = 7 static features + bias)
+Layer1: (8+16) × 3 × 3 × 8 = 1728 weights
+Layer2: (8+8) × 5 × 5 × 4  = 1600 weights
+Total:                       3456 weights
+```
+
+**Storage sizes:**
+- f32: 3456 × 4 = 13,824 bytes (~13.5 KB)
+- f16: 3456 × 2 = 6,912 bytes (~6.8 KB) ✓ **recommended**
+
+**Comparison to v1:**
+- v1: ~800 weights (3.2 KB f32)
+- v2: ~3456 weights (6.8 KB f16)
+- **Growth: 2× size for parametric features**
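+
+A quick sanity check of these counts (a standalone sketch, not part of `train_cnn_v2.py`; each layer sees the 8 static dims plus the previous layer's channels):
+
+```python
+def count_weights(kernels=(1, 3, 5), channels=(16, 8, 4)):
+    """Weights per layer: (8 + C_prev) * k * k * C_out, with C_prev = 0 for layer 0."""
+    total, c_prev = 0, 0
+    for k, c_out in zip(kernels, channels):
+        total += (8 + c_prev) * k * k * c_out
+        c_prev = c_out
+    return total
+
+print(count_weights())                    # 3456 weights -> 6912 bytes as f16
+print(count_weights(channels=(8, 4, 4))) # 1840 weights, the reduced variant (~47% fewer)
+```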
+
+### Kernel Size Guidelines
+
+**1×1 kernel (pointwise):**
+- No spatial context, channel mixing only
+- Weights: `(8 + C_in) × C_out` (8 = 7 static features + bias; C_in = 0 for the input layer)
+- Use for: Input layer, bottleneck layers
+
+**3×3 kernel (standard conv):**
+- Local spatial context
+- Weights: `(8 + C_in) × 9 × C_out`
+- Use for: Most inner layers
+
+**5×5 kernel (large receptive field):**
+- Wide spatial context
+- Weights: `(8 + C_in) × 25 × C_out`
+- Use for: Output layer, detail enhancement
+
+### Channel Storage (8×f16 per texel)
+
+```wgsl
+@group(0) @binding(1) var layer_input: texture_2d<u32>;
+
+fn unpack_channels(coord: vec2<i32>) -> array<f32, 8> {
+    let packed = textureLoad(layer_input, coord, 0);
+    return array(
+        unpack2x16float(packed.x).x, unpack2x16float(packed.x).y,
+        unpack2x16float(packed.y).x, unpack2x16float(packed.y).y,
+        unpack2x16float(packed.z).x, unpack2x16float(packed.z).y,
+        unpack2x16float(packed.w).x, unpack2x16float(packed.w).y
+    );
+}
+
+fn pack_channels(values: array<f32, 8>) -> vec4<u32> {
+    return vec4(
+        pack2x16float(vec2(values[0], values[1])),
+        pack2x16float(vec2(values[2], values[3])),
+        pack2x16float(vec2(values[4], values[5])),
+        pack2x16float(vec2(values[6], values[7]))
+    );
+}
+```
+
+---
+
+## Training Workflow
+
+### Script: `training/train_cnn_v2.py`
+
+**Static Feature Extraction:**
+
+```python
+import numpy as np
+
+def compute_static_features(rgb, depth):
+    """Generate 7D static features + bias dimension (HWC, channels-last)."""
+    h, w = rgb.shape[:2]
+
+    # RGBD channels
+    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
+
+    # UV coordinates (normalized)
+    uv_x = np.linspace(0, 1, w)[None, :].repeat(h, axis=0)
+    uv_y = np.linspace(0, 1, h)[:, None].repeat(w, axis=1)
+
+    # Multi-frequency position encoding
+    sin10_x = np.sin(10.0 * uv_x)
+
+    # Bias dimension (always 1.0)
+    bias = np.ones_like(r)
+
+    # Stack: [R, G, B, D, uv.x, uv.y, sin10_x, bias]
+    return np.stack([r, g, b, depth, uv_x, uv_y, sin10_x, bias], axis=-1)
+```
+
+**Network Definition:**
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class CNNv2(nn.Module):
+    def __init__(self, kernels=(1, 3, 5), channels=(16, 8, 4)):
+        super().__init__()
+
+        # Input layer: 8D (7 features + bias) → channels[0]
+        self.layer0 = nn.Conv2d(8, channels[0], kernel_size=kernels[0],
+                                padding=kernels[0] // 2, bias=False)
+
+        # Inner layers: (7 features + bias + C_prev) → C_next
+        in_ch_1 = 8 + channels[0]  # static + layer0 output
+        self.layer1 = nn.Conv2d(in_ch_1, channels[1], kernel_size=kernels[1],
+                                padding=kernels[1] // 2, bias=False)
+
+        # Output layer: (7 features + bias + C_last) → 4 (RGBA)
+        in_ch_2 = 8 + channels[1]
+        self.layer2 = nn.Conv2d(in_ch_2, 4, kernel_size=kernels[2],
+                                padding=kernels[2] // 2, bias=False)
+
+    def forward(self, static_features):
+        # Layer 0: Use full 8D static features (includes bias)
+        x0 = self.layer0(static_features)
+        x0 = F.relu(x0)
+
+        # Layer 1: Concatenate static + layer0 output
+        x1_input = torch.cat([static_features, x0], dim=1)
+        x1 = self.layer1(x1_input)
+        x1 = F.relu(x1)
+
+        # Layer 2: Concatenate static + layer1 output
+        x2_input = torch.cat([static_features, x1], dim=1)
+        output = self.layer2(x2_input)
+
+        return torch.sigmoid(output)  # RGBA output [0,1]
+```
+
+**Training Configuration:**
+
+```python
+# Hyperparameters
+kernels = [1, 3, 5] # Per-layer kernel sizes
+channels = [16, 8, 4] # Per-layer output channels
+learning_rate = 1e-3
+batch_size = 16
+epochs = 5000
+
+# Training loop (standard PyTorch f32)
+for epoch in range(epochs):
+    for rgb_batch, depth_batch, target_batch in dataloader:
+        # Compute static features (NumPy HWC per image; stack and convert to
+        # an NCHW float tensor before the forward pass)
+        static_feat = compute_static_features(rgb_batch, depth_batch)
+
+        # Forward pass
+        output = model(static_feat)
+        loss = criterion(output, target_batch)
+
+        # Backward pass
+        optimizer.zero_grad()
+        loss.backward()
+        optimizer.step()
+```
+
+**Checkpoint Format:**
+
+```python
+torch.save({
+    'state_dict': model.state_dict(),  # f32 weights
+    'config': {
+        'kernels': [1, 3, 5],
+        'channels': [16, 8, 4],
+        'features': ['R', 'G', 'B', 'D', 'uv.x', 'uv.y', 'sin10_x', 'bias']
+    },
+    'epoch': epoch,
+    'loss': loss.item()
+}, f'checkpoints/checkpoint_epoch_{epoch}.pth')
+```
+
+---
+
+## Export Workflow
+
+### Script: `training/export_cnn_v2_shader.py`
+
+**Process** (sketched below):
+1. Load checkpoint (f32 PyTorch weights)
+2. Extract layer configs (kernels, channels)
+3. Quantize weights to float16: `weights_f16 = weights_f32.astype(np.float16)`
+4. Generate WGSL shader per layer
+5. Write to `workspaces/<workspace>/shaders/cnn_v2_*.wgsl`
+
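+A minimal sketch of that loop (illustrative only; `emit_wgsl_layer` is a hypothetical helper, and the real script also carries per-layer kernel/channel metadata):
+
+```python
+import numpy as np
+import torch
+
+ckpt = torch.load('checkpoints/checkpoint_epoch_5000.pth', map_location='cpu')
+for i, (name, w) in enumerate(ckpt['state_dict'].items()):
+    w16 = w.numpy().astype(np.float16)           # quantize to f16
+    literals = w16.astype(np.float32).flatten()  # back to f32 for WGSL literals
+    # emit_wgsl_layer(i, literals, ckpt['config'])  # hypothetical WGSL writer
+```
+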
+**Example Generated Shader:**
+
+```wgsl
+// cnn_v2_layer_0.wgsl - Auto-generated from checkpoint_epoch_5000.pth
+
+const KERNEL_SIZE: u32 = 1u;
+const IN_CHANNELS: u32 = 8u; // 7 features + bias
+const OUT_CHANNELS: u32 = 16u;
+
+// Weights quantized to float16 (stored as f32 in shader)
+const weights: array<f32, 128> = array(
+    0.123047, -0.089844, 0.234375, 0.456055, ...
+);
+
+@group(0) @binding(0) var static_features: texture_2d<u32>;
+@group(0) @binding(1) var output_texture: texture_storage_2d<rgba32uint, write>;
+
+@compute @workgroup_size(8, 8)
+fn main(@builtin(global_invocation_id) id: vec3<u32>) {
+    // Load static features (8D)
+    let static_feat = get_static_features(vec2<i32>(id.xy));
+
+    // Convolution (1×1 kernel = pointwise)
+    var output: array<f32, OUT_CHANNELS>;
+    for (var c: u32 = 0u; c < OUT_CHANNELS; c++) {
+        var sum: f32 = 0.0;
+        for (var k: u32 = 0u; k < IN_CHANNELS; k++) {
+            sum += weights[c * IN_CHANNELS + k] * static_feat[k];
+        }
+        output[c] = max(0.0, sum); // ReLU activation
+    }
+
+    // Pack and store (8×f16 per texel)
+    textureStore(output_texture, vec2<i32>(id.xy), pack_f16x8(output));
+}
+```
+
+**Float16 Quantization:**
+- Training uses f32 throughout (PyTorch standard)
+- Export converts to np.float16, then back to f32 for WGSL literals
+- **Expected discrepancy:** <0.1% MSE (acceptable)
+- Validation via `validate_cnn_v2.sh` compares outputs
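+
+The round-trip error can be measured directly on exported weights (a standalone sketch; the 0.1% figure is this document's budget, not a guarantee):
+
+```python
+import numpy as np
+
+def f16_roundtrip_mse(weights_f32):
+    """Relative MSE introduced by f32 -> f16 -> f32 quantization."""
+    w16 = weights_f32.astype(np.float16).astype(np.float32)
+    return np.mean((weights_f32 - w16) ** 2) / np.mean(weights_f32 ** 2)
+
+w = np.random.randn(3456).astype(np.float32) * 0.1  # stand-in for real weights
+assert f16_roundtrip_mse(w) < 1e-3  # well under the 0.1% budget
+```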
+
+---
+
+## Validation Workflow
+
+### Script: `scripts/validate_cnn_v2.sh`
+
+**End-to-end pipeline:**
+```bash
+./scripts/validate_cnn_v2.sh checkpoints/checkpoint_epoch_5000.pth
+```
+
+**Steps automated:**
+1. Export checkpoint → .wgsl shaders
+2. Rebuild `cnn_test` tool
+3. Process test images with CNN v2
+4. Display input/output results
+
+**Usage:**
+```bash
+# Basic usage
+./scripts/validate_cnn_v2.sh checkpoint.pth
+
+# Custom paths
+./scripts/validate_cnn_v2.sh checkpoint.pth \
+ -i my_test_images/ \
+ -o results/ \
+ -b build_release
+
+# Skip rebuild (iterate on checkpoint only)
+./scripts/validate_cnn_v2.sh checkpoint.pth --skip-build
+
+# Skip export (iterate on test images only)
+./scripts/validate_cnn_v2.sh checkpoint.pth --skip-export
+
+# Show help
+./scripts/validate_cnn_v2.sh --help
+```
+
+**Options:**
+- `-b, --build-dir DIR` - Build directory (default: build)
+- `-w, --workspace NAME` - Workspace name (default: main)
+- `-i, --images DIR` - Test images directory (default: training/validation)
+- `-o, --output DIR` - Output directory (default: validation_results)
+- `--skip-build` - Use existing cnn_test binary
+- `--skip-export` - Use existing .wgsl shaders
+- `-h, --help` - Show full usage
+
+**Output:**
+- Input images: `<test_images_dir>/*.png`
+- Output images: `<output_dir>/*_output.png`
+- Opens results directory in system file browser
+
+---
+
+## Implementation Checklist
+
+### Phase 1: Shaders (Core Infrastructure)
+
+- [ ] `workspaces/main/shaders/cnn_v2_static.wgsl` - Static features compute
+ - [ ] RGBD sampling from framebuffer
+ - [ ] UV coordinate calculation
+ - [ ] sin(10\*uv.x) computation
+ - [ ] Bias dimension (constant 1.0)
+ - [ ] Float16 packing via `pack2x16float()`
+ - [ ] Output to `texture_storage_2d<rgba32uint>`
+
+- [ ] `workspaces/main/shaders/cnn_v2_layer_template.wgsl` - Layer template
+ - [ ] Static features unpacking
+ - [ ] Previous layer unpacking (8×f16)
+ - [ ] Convolution implementation (1×1, 3×3, 5×5)
+ - [ ] ReLU activation
+ - [ ] Output packing (8×f16)
+ - [ ] Proper padding handling
+
+### Phase 2: C++ Effect Class
+
+- [ ] `src/gpu/effects/cnn_v2_effect.h` - Header
+ - [ ] Class declaration inheriting from `PostProcessEffect`
+ - [ ] Static features texture member
+ - [ ] Layer textures vector
+ - [ ] Pipeline and bind group members
+
+- [ ] `src/gpu/effects/cnn_v2_effect.cc` - Implementation
+ - [ ] Constructor: Load shaders, create textures
+ - [ ] `init()`: Create pipelines, bind groups
+ - [ ] `render()`: Multi-pass execution
+ - [ ] Pass 0: Compute static features
+ - [ ] Pass 1-N: CNN layers
+ - [ ] Final: Composite to output
+ - [ ] Proper resource cleanup
+
+- [ ] Integration
+ - [ ] Add to `src/gpu/demo_effects.h` includes
+ - [ ] Add `cnn_v2_effect.cc` to `CMakeLists.txt` (headless + normal)
+ - [ ] Add shaders to `workspaces/main/assets.txt`
+ - [ ] Add to `src/tests/gpu/test_demo_effects.cc`
+
+### Phase 3: Training Pipeline
+
+- [ ] `training/train_cnn_v2.py` - Training script
+ - [ ] Static feature extraction function
+ - [ ] CNNv2 PyTorch model class
+ - [ ] Patch-based dataloader
+ - [ ] Training loop with checkpointing
+ - [ ] Command-line argument parsing
+ - [ ] Inference mode (ground truth generation)
+
+- [ ] `training/export_cnn_v2_shader.py` - Export script
+ - [ ] Checkpoint loading
+ - [ ] Weight extraction and f16 quantization
+ - [ ] Per-layer WGSL generation
+ - [ ] File output to workspace shaders/
+ - [ ] Metadata preservation
+
+### Phase 4: Tools & Validation
+
+- [ ] `scripts/validate_cnn_v2.sh` - End-to-end validation
+ - [ ] Command-line argument parsing
+ - [ ] Shader export orchestration
+ - [ ] Build orchestration
+ - [ ] Batch image processing
+ - [ ] Results display
+
+- [ ] `src/tools/cnn_test_main.cc` - Tool updates
+ - [ ] Add `--cnn-version v2` flag
+ - [ ] CNNv2Effect instantiation path
+ - [ ] Static features pass execution
+ - [ ] Multi-layer processing
+
+### Phase 5: Documentation
+
+- [ ] `doc/HOWTO.md` - Usage guide
+ - [ ] Training section (CNN v2)
+ - [ ] Export section
+ - [ ] Validation section
+ - [ ] Examples
+
+- [ ] `README.md` - Project overview update
+ - [ ] Mention CNN v2 capability
+
+---
+
+## File Structure
+
+### New Files
+
+```
+# Shaders (generated by export script)
+workspaces/main/shaders/cnn_v2_static.wgsl # Static features compute
+workspaces/main/shaders/cnn_v2_layer_0.wgsl # Input layer (generated)
+workspaces/main/shaders/cnn_v2_layer_1.wgsl # Inner layer (generated)
+workspaces/main/shaders/cnn_v2_layer_2.wgsl # Output layer (generated)
+
+# C++ implementation
+src/gpu/effects/cnn_v2_effect.h # Effect class header
+src/gpu/effects/cnn_v2_effect.cc # Effect implementation
+
+# Python training/export
+training/train_cnn_v2.py # Training script
+training/export_cnn_v2_shader.py # Shader generator
+training/validation/ # Test images directory
+
+# Scripts
+scripts/validate_cnn_v2.sh # End-to-end validation
+
+# Documentation
+doc/CNN_V2.md # This file
+```
+
+### Modified Files
+
+```
+src/gpu/demo_effects.h # Add CNNv2Effect include
+CMakeLists.txt # Add cnn_v2_effect.cc
+workspaces/main/assets.txt # Add cnn_v2 shaders
+workspaces/main/timeline.seq # Optional: add CNNv2Effect
+src/tests/gpu/test_demo_effects.cc # Add CNNv2 test case
+src/tools/cnn_test_main.cc # Add --cnn-version v2
+doc/HOWTO.md # Add CNN v2 sections
+TODO.md # Add CNN v2 task
+```
+
+### Unchanged (v1 Preserved)
+
+```
+training/train_cnn.py # Original training
+src/gpu/effects/cnn_effect.* # Original effect
+workspaces/main/shaders/cnn_*.wgsl # Original shaders
+```
+
+---
+
+## Performance Characteristics
+
+### Static Features Compute
+- **Cost:** ~0.1ms @ 1080p
+- **Frequency:** Once per frame
+- **Operations:** sin(), texture sampling, packing
+
+### CNN Layers (Example 3-layer)
+- **Layer0 (1×1, 8→16):** ~0.3ms
+- **Layer1 (3×3, 24→8):** ~0.8ms
+- **Layer2 (5×5, 16→4):** ~1.2ms
+- **Total:** ~2.4ms @ 1080p
+
+### Memory Usage
+- Static features: 1920×1080×8×2 = 33 MB (f16)
+- Layer buffers: 1920×1080×16×2 = 66 MB (max 16 channels)
+- Weights: ~6.8 KB (f16, in shader code)
+- **Total GPU memory:** ~100 MB
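+
+A back-of-envelope check of those numbers (a standalone sketch, assuming 1080p and the example [16, 8, 4] configuration):
+
+```python
+W, H, F16 = 1920, 1080, 2          # resolution, bytes per f16
+static = W * H * 8 * F16           # 8 packed features per pixel
+layers = W * H * 16 * F16          # widest layer: 16 channels
+print(static / 1e6, layers / 1e6)  # ~33.2 MB and ~66.4 MB
+```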
+
+---
+
+## Size Budget
+
+### CNN v1 vs v2
+
+| Metric | v1 | v2 | Delta |
+|--------|----|----|-------|
+| Weights (count) | 800 | 3456 | +2656 |
+| Storage (f32) | 3.2 KB | 13.8 KB | +10.6 KB |
+| Storage (f16) | N/A | 6.9 KB | +6.9 KB |
+| Shader code | ~500 lines | ~800 lines | +300 lines |
+
+### Mitigation Strategies
+
+**Reduce channels:**
+- [16,8,4] → [8,4,4] saves ~50% weights
+- [16,8,4] → [4,4,4] saves ~60% weights
+
+**Smaller kernels:**
+- [1,3,5] → [1,3,3] saves ~30% weights
+- [1,3,5] → [1,1,3] saves ~50% weights
+
+**Quantization:**
+- int8 weights: saves 75% (requires QAT training)
+- 4-bit weights: saves 87.5% (extreme, needs research)
+
+**Target:** Keep CNN v2 under 10 KB for 64k demo constraint
+
+---
+
+## Future Extensions
+
+### More Features (uint8 Packing)
+
+```wgsl
+// 16 uint8 features per texel (texture_storage_2d<rgba8unorm>)
+// [R, G, B, D, uv.x, uv.y, sin10.x, sin10.y,
+// sin20.x, sin20.y, dx, dy, gray_mip1, gray_mip2, variance, bias]
+```
+- Trade precision for quantity
+- Requires quantization-aware training
+
+### Temporal Features
+
+- Previous frame RGBA (motion awareness)
+- Optical flow vectors
+- Requires multi-frame buffer
+
+### Learned Position Encodings
+
+- Replace hand-crafted sin(10\*uv) with learned embeddings
+- Requires separate embedding network
+- Similar to NeRF position encoding
+
+### Dynamic Architecture
+
+- Runtime kernel size selection based on scene
+- Conditional layer execution (skip connections)
+- Layer pruning for performance
+
+---
+
+## References
+
+- **v1 Implementation:** `src/gpu/effects/cnn_effect.*`
+- **Training Guide:** `doc/HOWTO.md` (CNN Training section)
+- **Test Tool:** `doc/CNN_TEST_TOOL.md`
+- **Shader System:** `doc/SEQUENCE.md`
+- **Size Measurement:** `doc/SIZE_MEASUREMENT.md`
+
+---
+
+## Appendix: Design Decisions
+
+### Why Bias as Static Feature?
+
+**Alternatives considered:**
+1. Separate bias array per layer (Option B)
+2. Bias as static feature = 1.0 (Option A, chosen)
+
+**Decision rationale:**
+- Simpler shader code (fewer bindings)
+- Standard NN formulation (augmented input)
+- Saves 56-112 bytes per model
+- 7 features sufficient for v1 implementation
+- Can extend to uint8 packing if >7 features needed
+
+### Why Float16 for Weights?
+
+**Alternatives considered:**
+1. Keep f32 (larger, more accurate)
+2. Use f16 (smaller, GPU-native)
+3. Use int8 (smallest, needs QAT)
+
+**Decision rationale:**
+- f16 saves 50% vs f32 (critical for 64k target)
+- GPU-native support (pack2x16float in WGSL)
+- <0.1% accuracy loss (acceptable)
+- Simpler than int8 quantization
+
+### Why Multi-Frequency Position Encoding?
+
+**Inspiration:** NeRF (Neural Radiance Fields)
+
+**Benefits:**
+- Helps network learn high-frequency details
+- Better than raw UV coordinates
+- Small footprint (1D per frequency)
+
+**Future:** Add sin(20\*uv), sin(40\*uv) if >7 features available
+
+---
+
+**Document Version:** 1.0
+**Last Updated:** 2026-02-12
+**Status:** Design approved, ready for implementation
diff --git a/doc/HOWTO.md b/doc/HOWTO.md
index 7b0daa0..2b896ab 100644
--- a/doc/HOWTO.md
+++ b/doc/HOWTO.md
@@ -130,10 +130,27 @@ Processes entire image with sliding window (matches WGSL):
**Kernel sizes:** 3×3 (36 weights), 5×5 (100 weights), 7×7 (196 weights)
+### CNN v2 Validation
+
+End-to-end testing: checkpoint → shaders → build → test images → results
+
+```bash
+./scripts/validate_cnn_v2.sh checkpoints/checkpoint_epoch_5000.pth
+
+# Options:
+# -i DIR Test images directory (default: training/validation)
+# -o DIR Output directory (default: validation_results)
+# --skip-build Use existing cnn_test binary
+# -h Show all options
+```
+
+See `scripts/validate_cnn_v2.sh --help` for full usage. See `doc/CNN_V2.md` for design details.
+
---
## Timeline
+### Manual Editing
Edit `workspaces/main/timeline.seq`:
```text
SEQUENCE 0.0 0
@@ -141,6 +158,12 @@ SEQUENCE 0.0 0
```
Rebuild to apply. See `doc/SEQUENCE.md`.
+### Visual Editor
+```bash
+open tools/timeline_editor/index.html
+```
+Features: Drag/drop, beat-based editing, audio playback, waveform visualization, snap-to-beat. See `tools/timeline_editor/README.md`.
+
---
## Audio
diff --git a/go.sh b/go.sh
deleted file mode 100644
index e6ad52b..0000000
--- a/go.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/sh
-
-# ./training/train_cnn.py --layers 3 --kernel_sizes 3,3,3 --epochs 10000 --batch_size 16 --input training/input/ --target training/target_2/ --checkpoint-every 1000
-./training/train_cnn.py --export-only training/checkpoints/checkpoint_epoch_2000.pth
-./training/train_cnn.py --export-only training/checkpoints/checkpoint_epoch_2000.pth --infer training/input/img_001.png --output test/toto.png
-./training/train_cnn.py --export-only training/checkpoints/checkpoint_epoch_2000.pth \
- --infer training/input/img_001.png \
- --output output/ref/toto0.png --save-intermediates output/ref/
-./build/cnn_test training/input/img_001.png output/toto.png --save-intermediates output/
-open output/*
diff --git a/output/layer_0.png b/output/layer_0.png
deleted file mode 100644
index 5e66a7f..0000000
--- a/output/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/output/layer_1.png b/output/layer_1.png
deleted file mode 100644
index 3fc7102..0000000
--- a/output/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/output/ref/layer_0.png b/output/ref/layer_0.png
deleted file mode 100644
index b518ce0..0000000
--- a/output/ref/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/output/ref/layer_1.png b/output/ref/layer_1.png
deleted file mode 100644
index 91e5b9c..0000000
--- a/output/ref/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/output/toto.png b/output/toto.png
deleted file mode 100644
index b5fb086..0000000
--- a/output/toto.png
+++ /dev/null
Binary files differ
diff --git a/output/toto0.png b/output/toto0.png
deleted file mode 100644
index f970b84..0000000
--- a/output/toto0.png
+++ /dev/null
Binary files differ
diff --git a/scripts/validate_cnn_v2.sh b/scripts/validate_cnn_v2.sh
new file mode 100755
index 0000000..fcd9908
--- /dev/null
+++ b/scripts/validate_cnn_v2.sh
@@ -0,0 +1,198 @@
+#!/bin/bash
+# Validate CNN v2: Export checkpoint → Build → Test → Display results
+
+set -e
+
+# Default paths
+BUILD_DIR="build"
+WORKSPACE="main"
+TEST_IMAGES_DIR="training/validation"
+OUTPUT_DIR="validation_results"
+PYTHON="python3"
+
+# Colors
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m'
+
+print_usage() {
+    cat << EOF
+Usage: $0 CHECKPOINT [OPTIONS]
+
+End-to-end CNN v2 validation: export shaders, rebuild, test images, show results.
+
+Arguments:
+  CHECKPOINT            Path to .pth checkpoint file (required)
+
+Options:
+  -b, --build-dir DIR   Build directory (default: build)
+  -w, --workspace NAME  Workspace name (default: main)
+  -i, --images DIR      Test images directory (default: training/validation)
+  -o, --output DIR      Output directory (default: validation_results)
+  --python CMD          Python command (default: python3)
+  --skip-build          Skip cnn_test rebuild
+  --skip-export         Skip shader export (use existing .wgsl)
+  -h, --help            Show this help
+
+Example:
+  $0 checkpoints/checkpoint_epoch_5000.pth
+  $0 checkpoint.pth -i my_test_images/ -o results/
+  $0 checkpoint.pth --skip-build    # Use existing cnn_test binary
+EOF
+}
+
+log() { echo -e "${GREEN}[validate]${NC} $*"; }
+warn() { echo -e "${YELLOW}[validate]${NC} $*"; }
+error() { echo -e "${RED}[validate]${NC} $*" >&2; exit 1; }
+
+# Parse arguments
+CHECKPOINT=""
+SKIP_BUILD=false
+SKIP_EXPORT=false
+
+while [[ $# -gt 0 ]]; do
+    case $1 in
+        -h|--help)
+            print_usage
+            exit 0
+            ;;
+        -b|--build-dir)
+            BUILD_DIR="$2"
+            shift 2
+            ;;
+        -w|--workspace)
+            WORKSPACE="$2"
+            shift 2
+            ;;
+        -i|--images)
+            TEST_IMAGES_DIR="$2"
+            shift 2
+            ;;
+        -o|--output)
+            OUTPUT_DIR="$2"
+            shift 2
+            ;;
+        --python)
+            PYTHON="$2"
+            shift 2
+            ;;
+        --skip-build)
+            SKIP_BUILD=true
+            shift
+            ;;
+        --skip-export)
+            SKIP_EXPORT=true
+            shift
+            ;;
+        -*)
+            error "Unknown option: $1"
+            ;;
+        *)
+            if [[ -z "$CHECKPOINT" ]]; then
+                CHECKPOINT="$1"
+            else
+                error "Unexpected argument: $1"
+            fi
+            shift
+            ;;
+    esac
+done
+
+# Validate inputs
+[[ -z "$CHECKPOINT" ]] && error "Checkpoint file required. Use -h for help."
+[[ ! -f "$CHECKPOINT" ]] && error "Checkpoint not found: $CHECKPOINT"
+[[ ! -d "$TEST_IMAGES_DIR" ]] && error "Test images directory not found: $TEST_IMAGES_DIR"
+
+SHADER_DIR="workspaces/$WORKSPACE/shaders"
+CNN_TEST="$BUILD_DIR/cnn_test"
+
+log "Configuration:"
+log " Checkpoint: $CHECKPOINT"
+log " Build dir: $BUILD_DIR"
+log " Workspace: $WORKSPACE"
+log " Shader dir: $SHADER_DIR"
+log " Test images: $TEST_IMAGES_DIR"
+log " Output dir: $OUTPUT_DIR"
+echo
+
+# Step 1: Export shaders
+if [[ "$SKIP_EXPORT" = false ]]; then
+ log "Step 1/4: Exporting shaders from checkpoint..."
+ [[ ! -d "$SHADER_DIR" ]] && error "Shader directory not found: $SHADER_DIR"
+
+ if [[ ! -f "training/export_cnn_v2_shader.py" ]]; then
+ error "Export script not found: training/export_cnn_v2_shader.py"
+ fi
+
+ $PYTHON training/export_cnn_v2_shader.py "$CHECKPOINT" --output-dir "$SHADER_DIR" \
+ || error "Shader export failed"
+
+ log "✓ Shaders exported to $SHADER_DIR"
+else
+ warn "Skipping shader export (using existing .wgsl files)"
+fi
+
+# Step 2: Rebuild cnn_test
+if [[ "$SKIP_BUILD" = false ]]; then
+ log "Step 2/4: Rebuilding cnn_test..."
+
+ cmake --build "$BUILD_DIR" -j4 --target cnn_test \
+ || error "Build failed"
+
+ log "✓ Built $CNN_TEST"
+else
+ warn "Skipping build (using existing binary)"
+fi
+
+[[ ! -x "$CNN_TEST" ]] && error "cnn_test not found or not executable: $CNN_TEST"
+
+# Step 3: Process test images
+log "Step 3/4: Processing test images..."
+mkdir -p "$OUTPUT_DIR"
+
+# Find PNG images (portable alternative to mapfile, which macOS's bash 3.2 lacks)
+IMAGES=()
+while IFS= read -r img; do IMAGES+=("$img"); done \
+    < <(find "$TEST_IMAGES_DIR" -maxdepth 1 -name "*.png" | sort)
+[[ ${#IMAGES[@]} -eq 0 ]] && error "No PNG images found in $TEST_IMAGES_DIR"
+
+log "Found ${#IMAGES[@]} test image(s)"
+
+for img in "${IMAGES[@]}"; do
+ basename=$(basename "$img" .png)
+ output="$OUTPUT_DIR/${basename}_output.png"
+
+ log " Processing $basename.png..."
+ "$CNN_TEST" "$img" "$output" --cnn-version v2 \
+ || warn " Failed: $basename.png"
+done
+
+log "✓ Processed ${#IMAGES[@]} image(s)"
+
+# Step 4: Display results
+log "Step 4/4: Opening results..."
+
+case "$(uname -s)" in
+ Darwin*)
+ open "$OUTPUT_DIR"
+ ;;
+ Linux*)
+ if command -v xdg-open &> /dev/null; then
+ xdg-open "$OUTPUT_DIR"
+ else
+ log "Results saved to: $OUTPUT_DIR"
+ fi
+ ;;
+ MINGW*|MSYS*|CYGWIN*)
+ explorer "$OUTPUT_DIR"
+ ;;
+ *)
+ log "Results saved to: $OUTPUT_DIR"
+ ;;
+esac
+
+log "✓ Validation complete!"
+log ""
+log "Results:"
+log " Input: $TEST_IMAGES_DIR/*.png"
+log " Output: $OUTPUT_DIR/*_output.png"
diff --git a/test/toto.png b/test/toto.png
deleted file mode 100644
index de86e19..0000000
--- a/test/toto.png
+++ /dev/null
Binary files differ
diff --git a/test_passthrough.wgsl b/test_passthrough.wgsl
deleted file mode 100644
index 1e5f52a..0000000
--- a/test_passthrough.wgsl
+++ /dev/null
@@ -1,10 +0,0 @@
-@vertex fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4<f32> {
- var pos = array<vec2<f32>, 3>(
- vec2<f32>(-1.0, -1.0), vec2<f32>(3.0, -1.0), vec2<f32>(-1.0, 3.0)
- );
- return vec4<f32>(pos[i], 0.0, 1.0);
-}
-
-@fragment fn fs_main(@builtin(position) p: vec4<f32>) -> @location(0) vec4<f32> {
- return vec4<f32>(1.0, 0.0, 0.0, 1.0); // Solid red
-}
diff --git a/tmp/layer_0.png b/tmp/layer_0.png
deleted file mode 100644
index 9e2e35c..0000000
--- a/tmp/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/tmp/layer_1.png b/tmp/layer_1.png
deleted file mode 100644
index 16b1a28..0000000
--- a/tmp/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/tools/timeline_editor/README.md b/tools/timeline_editor/README.md
index 6e368cf..cc13a41 100644
--- a/tools/timeline_editor/README.md
+++ b/tools/timeline_editor/README.md
@@ -14,28 +14,34 @@ Interactive web-based editor for `timeline.seq` files.
- ⚙️ Stack-order based priority system
- 🔍 Zoom (10%-200%) with mouse wheel + Ctrl/Cmd
- 🎵 Audio waveform visualization (aligned to beats)
-- 🎼 Snap-to-beat mode (enabled by default)
+- 🎼 Quantize grid (Off, 1/32, 1/16, 1/8, 1/4, 1/2, 1 beat)
- 🎛️ BPM slider (60-200 BPM)
- 🔄 Re-order sequences by time
- 🗑️ Delete sequences/effects
-- ▶️ **Audio playback with auto-expand/collapse** (NEW)
-- 🎚️ **Sticky audio track and timeline ticks** (NEW)
+- ▶️ Audio playback with auto-expand/collapse
+- 🎚️ Sticky audio track and timeline ticks
+- 🔴 **Playback indicator on waveform** (NEW)
+- 🎯 **Double-click seek during playback** (NEW)
+- 📍 **Click waveform to seek** (NEW)
## Usage
1. **Open:** `open tools/timeline_editor/index.html` or double-click in browser
2. **Load timeline:** Click "📂 Load timeline.seq" → select `workspaces/main/timeline.seq`
3. **Load audio:** Click "🎵 Load Audio (WAV)" → select audio file
+   - **Auto-load via URL:** `index.html?seq=timeline.seq&wav=audio.wav`
4. **Playback:**
- Click "▶ Play" or press **Spacebar** to play/pause
- - Click waveform to seek
+ - Click waveform to seek to position
+ - **Double-click timeline** to seek during playback (continues playing)
- Watch sequences auto-expand/collapse during playback
- - Red playback indicator shows current position
+ - Red playback indicators on both timeline and waveform show current position
5. **Edit:**
- - Drag sequences/effects to reposition
- - Double-click sequence header to collapse/expand
+ - Drag sequences/effects to reposition (works when collapsed or expanded)
+ - Double-click anywhere on sequence to collapse/expand
- Click item to edit properties in side panel
- Drag effect handles to resize
+ - **Quantize:** Use dropdown or hotkeys (0-6) to snap to grid
6. **Zoom:** Ctrl/Cmd + mouse wheel (zooms at cursor position)
7. **Save:** Click "💾 Save timeline.seq"
@@ -78,9 +84,28 @@ SEQUENCE 2.5s 0 "Explicit seconds" # Rare: start at 2.5 physical seconds
EFFECT + Fade 0 4 # Still uses beats for duration
```
+## URL Parameters
+
+Auto-load files on page load:
+```
+index.html?seq=../../workspaces/main/timeline.seq&wav=../../audio/track.wav
+```
+
+**Parameters:**
+- `seq` - Path to `.seq` file (relative or absolute URL)
+- `wav` - Path to `.wav` audio file (relative or absolute URL)
+
+**Example:**
+```bash
+open "tools/timeline_editor/index.html?seq=../../workspaces/main/timeline.seq"
+```
+
## Keyboard Shortcuts
- **Spacebar**: Play/pause audio playback
+- **0-6**: Quantize grid (0=Off, 1=1 beat, 2=1/2, 3=1/4, 4=1/8, 5=1/16, 6=1/32)
+- **Double-click timeline**: Seek to position (continues playing if active)
+- **Double-click sequence**: Collapse/expand
- **Ctrl/Cmd + Wheel**: Zoom in/out at cursor position
## Technical Notes
@@ -91,9 +116,12 @@ SEQUENCE 2.5s 0 "Explicit seconds" # Rare: start at 2.5 physical seconds
- BPM used for seconds conversion (tooltips, audio waveform alignment)
- Priority determines render order (higher = on top)
- Collapsed sequences show 35px title bar, expanded show full effect stack
-- Time markers show beats by default (4-beat/bar increments)
+- **Show Beats** toggle: Switch time markers between beats and seconds
+- Time markers show 4-beat/bar increments (beats) or 1s increments (seconds)
- **Waveform and time markers are sticky** at top during scroll/zoom
- Vertical grid lines aid alignment
-- Snap-to-beat enabled by default for musical alignment
+- **Quantize grid**: Independent snap control (works in both beat and second display modes)
- **Auto-expand/collapse**: Active sequence expands during playback, previous collapses
- **Auto-scroll**: Timeline follows playback indicator (keeps it in middle third of viewport)
+- **Dual playback indicators**: Red bars on both timeline and waveform (synchronized)
+- **Seamless seek**: Double-click or waveform click seeks without stopping playback
diff --git a/tools/timeline_editor/ROADMAP.md b/tools/timeline_editor/ROADMAP.md
index 216adbf..b14a73b 100644
--- a/tools/timeline_editor/ROADMAP.md
+++ b/tools/timeline_editor/ROADMAP.md
@@ -8,30 +8,22 @@ This document outlines planned enhancements for the interactive timeline editor.
### Audio Playback Integration Issues
-1. **Audio waveform doesn't scale with zoom nor follow timeline**
- - Waveform should horizontally sync with timeline ticks/sequences
- - Should scale to match `pixelsPerSecond` zoom level
- - Currently remains static regardless of zoom
+1. ~~**Audio waveform doesn't scale with zoom nor follow timeline**~~ ✅ FIXED
+ - Waveform now correctly syncs with timeline at all zoom levels
-2. **Playback indicator doesn't follow zoom and height issues**
- - Vertical red bar position calculation doesn't account for `pixelsPerSecond`
- - Doesn't reach bottom when sequences have scrolled
- - Needs to span full `timeline-content` height dynamically
+2. ~~**Playback indicator doesn't follow zoom and height issues**~~ ✅ FIXED
+ - Red bar now dynamically spans full timeline height
+ - Position correctly accounts for pixelsPerSecond
-3. **Sequences overlap timeline at scroll origin**
- - Some sequences still go behind timeline ticks
- - Notably when wheel pans back to beginning (scrollLeft = 0)
- - Need proper clipping or z-index management
+3. ~~**Sequences overlap timeline at scroll origin**~~ ✅ FIXED
+ - Proper padding prevents overlap with timeline border
-4. **Timeline and waveform should be fixed, not floating**
- - Currently using sticky positioning
- - Should use true fixed positioning at top
- - Should remain stationary regardless of scroll
+4. ~~**Timeline and waveform should be fixed, not floating**~~ ✅ FIXED
+ - Sticky header stays at top during scroll
-5. **Status indicator causes reflow**
- - Green status text appears/disappears causing layout shift
- - Should be relocated to top or bottom as fixed/always-visible
- - Prevents jarring reflow when messages appear
+5. ~~**Status indicator causes reflow**~~ ✅ FIXED
+ - Messages now fixed positioned at top-right
+ - No layout shift when appearing/disappearing
---
diff --git a/tools/timeline_editor/index.html b/tools/timeline_editor/index.html
index c9385ad..45c9f1f 100644
--- a/tools/timeline_editor/index.html
+++ b/tools/timeline_editor/index.html
@@ -5,492 +5,100 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Timeline Editor - timeline.seq</title>
<style>
- * {
- margin: 0;
- padding: 0;
- box-sizing: border-box;
+  :root {
+    --bg-dark: #1e1e1e;
+    --bg-medium: #252526;
+    --bg-light: #3c3c3c;
+    --text-primary: #d4d4d4;
+    --text-muted: #858585;
+    --accent-blue: #0e639c;
+    --accent-blue-hover: #1177bb;
+    --accent-green: #4ec9b0;
+    --accent-orange: #ce9178;
+    --accent-red: #f48771;
+    --border-color: #858585;
+    --gap: 10px;
+    --radius: 4px;
+  }
- body {
- font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
- background: #1e1e1e;
- color: #d4d4d4;
- padding: 20px;
- margin: 0;
- min-height: 100vh;
- box-sizing: border-box;
- }
-
- .container {
- max-width: 100%;
- width: 100%;
- margin: 0 auto;
- box-sizing: border-box;
- }
+ * { margin: 0; padding: 0; box-sizing: border-box; }
+ body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; background: var(--bg-dark); color: var(--text-primary); padding: 20px; min-height: 100vh; }
+ .container { max-width: 100%; width: 100%; margin: 0 auto; }
- header {
- background: #252526;
- padding: 20px;
- border-radius: 8px;
- margin-bottom: 20px;
- display: flex;
- align-items: center;
- justify-content: space-between;
- gap: 20px;
- flex-wrap: wrap;
- }
+ header { background: var(--bg-medium); padding: 20px; border-radius: 8px; margin-bottom: 20px; display: flex; align-items: center; justify-content: space-between; gap: 20px; flex-wrap: wrap; }
+ h1 { color: var(--accent-green); white-space: nowrap; }
+ .controls { display: flex; gap: var(--gap); flex-wrap: wrap; align-items: center; }
+ .zoom-controls { display: flex; gap: var(--gap); flex-wrap: wrap; align-items: center; margin-bottom: var(--gap); }
- h1 {
- margin: 0;
- color: #4ec9b0;
- white-space: nowrap;
- }
-
- .controls {
- display: flex;
- gap: 10px;
- flex-wrap: wrap;
- align-items: center;
- }
-
- .checkbox-label {
- display: flex;
- align-items: center;
- gap: 8px;
- color: #d4d4d4;
- cursor: pointer;
- user-select: none;
- }
-
- .checkbox-label input[type="checkbox"] {
- cursor: pointer;
- }
-
- button {
- background: #0e639c;
- color: white;
- border: none;
- padding: 10px 20px;
- border-radius: 4px;
- cursor: pointer;
- font-size: 14px;
- }
-
- button:hover {
- background: #1177bb;
- }
-
- button:disabled {
- background: #3c3c3c;
- cursor: not-allowed;
- }
-
- input[type="file"] {
- display: none;
- }
-
- .file-label {
- background: #0e639c;
- color: white;
- padding: 10px 20px;
- border-radius: 4px;
- cursor: pointer;
- display: inline-block;
- }
-
- .file-label:hover {
- background: #1177bb;
- }
-
- .timeline-container {
- background: #252526;
- border-radius: 8px;
- padding: 20px;
- position: relative;
- height: calc(100vh - 280px);
- min-height: 500px;
- display: flex;
- flex-direction: column;
- }
-
- .timeline-content {
- flex: 1;
- overflow-x: auto;
- overflow-y: auto;
- position: relative;
- /* Hide scrollbars while keeping scroll functionality */
- scrollbar-width: none; /* Firefox */
- -ms-overflow-style: none; /* IE/Edge */
- }
-
- .timeline-content::-webkit-scrollbar {
- display: none; /* Chrome/Safari/Opera */
- }
-
- .timeline {
- position: relative;
- min-height: 100%;
- border-left: 2px solid #3c3c3c;
- }
-
- .sticky-header {
- position: relative;
- background: #252526;
- z-index: 100;
- padding-bottom: 10px;
- border-bottom: 2px solid #3c3c3c;
- flex-shrink: 0;
- }
-
- .playback-controls {
- display: flex;
- align-items: center;
- gap: 10px;
- padding: 10px 0;
- }
-
- #playPauseBtn {
- width: 60px;
- padding: 8px 12px;
- }
-
- #waveformCanvas {
- position: relative;
- height: 80px;
- width: 100%;
- background: rgba(0, 0, 0, 0.3);
- border-radius: 4px;
- cursor: crosshair;
- }
-
- .playback-indicator {
- position: absolute;
- top: 0;
- width: 2px;
- height: 100%;
- background: #f48771;
- box-shadow: 0 0 4px rgba(244, 135, 113, 0.8);
- pointer-events: none;
- z-index: 90;
- display: none;
- }
-
- .playback-indicator.playing {
- display: block;
- }
-
- .time-markers {
- position: relative;
- height: 30px;
- margin-top: 10px;
- border-bottom: 1px solid #3c3c3c;
- }
-
- .time-marker {
- position: absolute;
- top: 0;
- font-size: 12px;
- color: #858585;
- }
-
- .time-marker::before {
- content: '';
- position: absolute;
- left: 0;
- top: 20px;
- width: 1px;
- height: 10px;
- background: #3c3c3c;
- }
+ button, .file-label { background: var(--accent-blue); color: white; border: none; padding: 10px 20px; border-radius: var(--radius); cursor: pointer; font-size: 14px; display: inline-block; }
+ button:hover, .file-label:hover { background: var(--accent-blue-hover); }
+ button:disabled { background: var(--bg-light); cursor: not-allowed; }
+ input[type="file"] { display: none; }
- .time-marker::after {
- content: '';
- position: absolute;
- left: 0;
- top: 30px;
- width: 1px;
- height: 10000px;
- background: rgba(60, 60, 60, 0.2);
- pointer-events: none;
- }
+ .checkbox-label { display: flex; align-items: center; gap: 8px; cursor: pointer; user-select: none; }
+ .checkbox-label input[type="checkbox"] { cursor: pointer; }
- .sequence {
- position: absolute;
- background: #264f78;
- border: 2px solid #0e639c;
- border-radius: 4px;
- padding: 8px;
- cursor: move;
- min-height: 40px;
- transition: box-shadow 0.2s;
- }
+ .timeline-container { background: var(--bg-medium); border-radius: 8px; position: relative; height: calc(100vh - 280px); min-height: 500px; display: flex; flex-direction: column; }
+ .timeline-content { flex: 1; overflow: auto; position: relative; padding: 0 20px 20px 20px; scrollbar-width: none; -ms-overflow-style: none; }
+ .timeline-content::-webkit-scrollbar { display: none; }
+ .timeline { position: relative; min-height: 100%; border-left: 2px solid var(--bg-light); }
- .sequence:hover {
- box-shadow: 0 0 10px rgba(14, 99, 156, 0.5);
- }
+ .sticky-header { position: sticky; top: 0; background: var(--bg-medium); z-index: 100; padding: 20px 20px 10px 20px; border-bottom: 2px solid var(--bg-light); flex-shrink: 0; }
+ .waveform-container { position: relative; height: 80px; overflow: hidden; background: rgba(0, 0, 0, 0.3); border-radius: var(--radius); cursor: crosshair; }
+ #waveformCanvas { position: absolute; left: 0; top: 0; height: 80px; display: block; }
- .sequence.selected {
- border-color: #4ec9b0;
- box-shadow: 0 0 10px rgba(78, 201, 176, 0.5);
- }
+ .playback-indicator { position: absolute; top: 0; left: 0; width: 2px; background: var(--accent-red); box-shadow: 0 0 4px rgba(244, 135, 113, 0.8); pointer-events: none; z-index: 90; display: block; }
- .sequence.active-flash {
- animation: sequenceFlash 0.6s ease-out;
- }
+ .time-markers { position: relative; height: 30px; margin-top: var(--gap); border-bottom: 1px solid var(--bg-light); }
+ .time-marker { position: absolute; top: 0; font-size: 12px; color: var(--text-muted); }
+ .time-marker::before { content: ''; position: absolute; left: 0; top: 20px; width: 1px; height: 10px; background: var(--bg-light); }
+ .time-marker::after { content: ''; position: absolute; left: 0; top: 30px; width: 1px; height: 10000px; background: rgba(60, 60, 60, 0.2); pointer-events: none; }
+ .sequence { position: absolute; background: #264f78; border: 2px solid var(--accent-blue); border-radius: var(--radius); padding: 8px; cursor: move; min-height: 40px; transition: box-shadow 0.2s; }
+ .sequence:hover { box-shadow: 0 0 10px rgba(14, 99, 156, 0.5); }
+ .sequence.selected { border-color: var(--accent-green); box-shadow: 0 0 10px rgba(78, 201, 176, 0.5); }
+ .sequence.collapsed { overflow: hidden !important; background: #1a3a4a !important; }
+ .sequence.collapsed .sequence-name { display: none !important; }
+ .sequence.active-playing { border-color: var(--accent-green); background: #2a5f4a; }
+ .sequence.active-flash { animation: sequenceFlash 0.6s ease-out; }
@keyframes sequenceFlash {
- 0% {
- box-shadow: 0 0 20px rgba(78, 201, 176, 0.8);
- border-color: #4ec9b0;
- }
- 100% {
- box-shadow: 0 0 10px rgba(14, 99, 156, 0.5);
- border-color: #0e639c;
- }
- }
-
- .sequence-header {
- position: absolute;
- top: 0;
- left: 0;
- right: 0;
- padding: 8px;
- z-index: 5;
- cursor: pointer;
- user-select: none;
- }
-
- .sequence-header-name {
- font-size: 14px;
- font-weight: bold;
- color: #ffffff;
- }
-
- .sequence:not(.collapsed) .sequence-header-name {
- display: none;
- }
-
- .sequence.collapsed {
- overflow: hidden !important;
- background: #1a3a4a !important;
- }
-
- .sequence.collapsed .sequence-name {
- display: none !important;
- }
-
- .sequence-name {
- position: absolute;
- top: 50%;
- left: 50%;
- transform: translate(-50%, -50%);
- font-size: 24px;
- font-weight: bold;
- color: #ffffff;
- text-shadow: 2px 2px 8px rgba(0, 0, 0, 0.9),
- -1px -1px 4px rgba(0, 0, 0, 0.7);
- pointer-events: none;
- white-space: nowrap;
- opacity: 1;
- transition: opacity 0.3s ease;
- z-index: 10;
- }
-
- .sequence.hovered .sequence-name {
- opacity: 0;
- }
-
- .sequence-info {
- position: absolute;
- top: 8px;
- left: 8px;
- font-size: 11px;
- color: #858585;
- pointer-events: none;
- }
-
- .effect {
- position: absolute;
- background: #3a3d41;
- border: 1px solid #858585;
- border-radius: 3px;
- padding: 4px 8px;
- cursor: move;
- font-size: 11px;
- transition: box-shadow 0.2s;
- display: flex;
- align-items: center;
- white-space: nowrap;
- overflow: hidden;
- text-overflow: ellipsis;
- }
-
- .effect:hover {
- box-shadow: 0 0 8px rgba(133, 133, 133, 0.5);
- background: #45484d;
- }
-
- .effect.selected {
- border-color: #ce9178;
- box-shadow: 0 0 8px rgba(206, 145, 120, 0.5);
- }
-
- .effect small {
- font-size: 11px;
- color: #d4d4d4;
- }
-
- .effect-handle {
- position: absolute;
- top: 0;
- width: 6px;
- height: 100%;
- background: rgba(78, 201, 176, 0.8);
- cursor: ew-resize;
- display: none;
- z-index: 10;
- }
-
- .effect.selected .effect-handle {
- display: block;
- }
-
- .effect-handle.left {
- left: 0;
- border-radius: 3px 0 0 3px;
- }
-
- .effect-handle.right {
- right: 0;
- border-radius: 0 3px 3px 0;
- }
-
- .effect-handle:hover {
- background: rgba(78, 201, 176, 1);
- width: 8px;
- }
-
- .properties-panel {
- position: fixed;
- bottom: 20px;
- left: 20px;
- width: 350px;
- max-height: 80vh;
- background: #252526;
- padding: 15px;
- border-radius: 8px;
- box-shadow: 0 4px 12px rgba(0, 0, 0, 0.5);
- z-index: 1000;
- overflow-y: auto;
- transition: transform 0.3s ease;
- }
-
- .properties-panel.collapsed {
- transform: translateY(calc(100% + 40px));
- }
-
- .panel-header {
- display: flex;
- justify-content: space-between;
- align-items: center;
- margin-bottom: 15px;
- padding-bottom: 10px;
- border-bottom: 1px solid #3c3c3c;
- }
-
- .panel-header h2 {
- margin: 0;
- color: #4ec9b0;
- font-size: 16px;
+ 0% { box-shadow: 0 0 20px rgba(78, 201, 176, 0.8); border-color: var(--accent-green); }
+ 100% { box-shadow: 0 0 10px rgba(14, 99, 156, 0.5); border-color: var(--accent-blue); }
}
- .panel-toggle {
- background: transparent;
- border: 1px solid #858585;
- color: #d4d4d4;
- padding: 4px 8px;
- border-radius: 3px;
- cursor: pointer;
- font-size: 12px;
- }
+ .sequence-header { position: absolute; top: 0; left: 0; right: 0; padding: 8px; z-index: 5; cursor: move; user-select: none; }
+ .sequence-header-name { font-size: 14px; font-weight: bold; color: #ffffff; }
+ .sequence:not(.collapsed) .sequence-header-name { display: none; }
+ .sequence-name { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); font-size: 24px; font-weight: bold; color: #ffffff; text-shadow: 2px 2px 8px rgba(0, 0, 0, 0.9), -1px -1px 4px rgba(0, 0, 0, 0.7); pointer-events: none; white-space: nowrap; opacity: 1; transition: opacity 0.3s ease; z-index: 10; }
+ .sequence.hovered .sequence-name { opacity: 0; }
- .panel-toggle:hover {
- background: #3c3c3c;
- }
+ .effect { position: absolute; background: #3a3d41; border: 1px solid var(--border-color); border-radius: 3px; padding: 4px 8px; cursor: move; font-size: 11px; transition: box-shadow 0.2s; display: flex; align-items: center; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
+ .effect:hover { box-shadow: 0 0 8px rgba(133, 133, 133, 0.5); background: #45484d; }
+ .effect.selected { border-color: var(--accent-orange); box-shadow: 0 0 8px rgba(206, 145, 120, 0.5); }
+ .effect-handle { position: absolute; top: 0; width: 6px; height: 100%; background: rgba(78, 201, 176, 0.8); cursor: ew-resize; display: none; z-index: 10; }
+ .effect.selected .effect-handle { display: block; }
+ .effect-handle.left { left: 0; border-radius: 3px 0 0 3px; }
+ .effect-handle.right { right: 0; border-radius: 0 3px 3px 0; }
+ .effect-handle:hover { background: var(--accent-green); width: 8px; }
- .panel-collapse-btn {
- position: fixed;
- bottom: 20px;
- left: 20px;
- background: #252526;
- border: 1px solid #858585;
- color: #d4d4d4;
- padding: 8px 12px;
- border-radius: 4px;
- cursor: pointer;
- z-index: 999;
- box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
- display: none;
- }
+ .properties-panel { position: fixed; bottom: 20px; left: 20px; width: 350px; max-height: 80vh; background: var(--bg-medium); padding: 15px; border-radius: 8px; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.5); z-index: 1000; overflow-y: auto; transition: transform 0.3s ease; }
+ .properties-panel.collapsed { transform: translateY(calc(100% + 40px)); }
+ .panel-header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 15px; padding-bottom: 10px; border-bottom: 1px solid var(--bg-light); }
+ .panel-header h2 { margin: 0; color: var(--accent-green); font-size: 16px; }
+ .panel-toggle { background: transparent; border: 1px solid var(--border-color); color: var(--text-primary); padding: 4px 8px; border-radius: 3px; cursor: pointer; font-size: 12px; }
+ .panel-toggle:hover { background: var(--bg-light); }
+ .panel-collapse-btn { position: fixed; bottom: 20px; left: 20px; background: var(--bg-medium); border: 1px solid var(--border-color); color: var(--text-primary); padding: 8px 12px; border-radius: var(--radius); cursor: pointer; z-index: 999; box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3); display: none; }
+ .panel-collapse-btn:hover { background: var(--bg-light); }
+ .panel-collapse-btn.visible { display: block; }
- .panel-collapse-btn:hover {
- background: #3c3c3c;
- }
+ .property-group { margin-bottom: 15px; }
+ .property-group label { display: block; margin-bottom: 5px; color: var(--text-muted); font-size: 14px; }
+ .property-group input, .property-group select { width: 100%; padding: 8px; background: var(--bg-light); border: 1px solid var(--border-color); border-radius: var(--radius); color: var(--text-primary); font-size: 14px; }
- .panel-collapse-btn.visible {
- display: block;
- }
-
- .property-group {
- margin-bottom: 15px;
- }
-
- .property-group label {
- display: block;
- margin-bottom: 5px;
- color: #858585;
- font-size: 14px;
- }
-
- .property-group input,
- .property-group select {
- width: 100%;
- padding: 8px;
- background: #3c3c3c;
- border: 1px solid #858585;
- border-radius: 4px;
- color: #d4d4d4;
- font-size: 14px;
- }
-
- .zoom-controls {
- margin-bottom: 10px;
- }
-
- .stats {
- background: #1e1e1e;
- padding: 10px;
- border-radius: 4px;
- margin-top: 10px;
- font-size: 12px;
- color: #858585;
- }
-
- .error {
- background: #5a1d1d;
- color: #f48771;
- padding: 10px;
- border-radius: 4px;
- margin-bottom: 10px;
- }
-
- .success {
- background: #1e5231;
- color: #89d185;
- padding: 10px;
- border-radius: 4px;
- margin-bottom: 10px;
- }
+ .stats { background: var(--bg-dark); padding: 10px; border-radius: var(--radius); margin-top: 10px; font-size: 12px; color: var(--text-muted); }
+ #messageArea { position: fixed; top: 80px; right: 20px; z-index: 2000; max-width: 400px; }
+ .error { background: #5a1d1d; color: var(--accent-red); padding: 10px; border-radius: var(--radius); box-shadow: 0 2px 8px rgba(0,0,0,0.3); }
+ .success { background: #1e5231; color: #89d185; padding: 10px; border-radius: var(--radius); box-shadow: 0 2px 8px rgba(0,0,0,0.3); }
</style>
</head>
<body>
@@ -498,15 +106,9 @@
<header>
<h1>📊 Timeline Editor</h1>
<div class="controls">
- <label class="file-label">
- 📂 Load timeline.seq
- <input type="file" id="fileInput" accept=".seq">
- </label>
+ <label class="file-label">📂 Load timeline.seq<input type="file" id="fileInput" accept=".seq"></label>
<button id="saveBtn" disabled>💾 Save timeline.seq</button>
- <label class="file-label">
- 🎵 Load Audio (WAV)
- <input type="file" id="audioInput" accept=".wav">
- </label>
+ <label class="file-label">🎵 Load Audio (WAV)<input type="file" id="audioInput" accept=".wav"></label>
<button id="clearAudioBtn" disabled>✖ Clear Audio</button>
<button id="addSequenceBtn" disabled>➕ Add Sequence</button>
<button id="deleteBtn" disabled>🗑️ Delete Selected</button>
@@ -520,20 +122,33 @@
<label style="margin-left: 20px">BPM: <input type="range" id="bpmSlider" min="60" max="200" value="120" step="1"></label>
<span id="currentBPM">120</span>
<label class="checkbox-label" style="margin-left: 20px">
- <input type="checkbox" id="showBeatsCheckbox" checked>
- Show Beats
+ <input type="checkbox" id="showBeatsCheckbox" checked>Show Beats
</label>
+ <label style="margin-left: 20px">Quantize:
+ <select id="quantizeSelect">
+ <option value="0">Off</option>
+ <option value="32">1/32</option>
+ <option value="16">1/16</option>
+ <option value="8">1/8</option>
+ <option value="4">1/4</option>
+ <option value="2">1/2</option>
+ <option value="1" selected>1 beat</option>
+ </select>
+ </label>
+ <div id="playbackControls" style="display: none; margin-left: 20px; gap: 10px; align-items: center;">
+ <span id="playbackTime">0.00s (0.00b)</span>
+ <button id="playPauseBtn">▶ Play</button>
+ </div>
</div>
<div id="messageArea"></div>
<div class="timeline-container">
<div class="sticky-header">
- <div class="playback-controls" id="playbackControls" style="display: none;">
- <button id="playPauseBtn">▶ Play</button>
- <span id="playbackTime">0.00s</span>
+ <div class="waveform-container" id="waveformContainer" style="display: none;">
+ <div class="playback-indicator" id="waveformPlaybackIndicator"></div>
+ <canvas id="waveformCanvas"></canvas>
</div>
- <canvas id="waveformCanvas" style="display: none;"></canvas>
<div class="time-markers" id="timeMarkers"></div>
</div>
<div class="timeline-content" id="timelineContent">
@@ -556,196 +171,130 @@
</div>
<script>
- // Global state
- let sequences = [];
- let currentFile = null;
- let selectedItem = null;
- let pixelsPerSecond = 100;
- let showBeats = true;
- let bpm = 120;
- let isDragging = false;
- let dragOffset = { x: 0, y: 0 };
- let lastActiveSeqIndex = -1;
- let isDraggingHandle = false;
- let handleType = null; // 'left' or 'right'
- let audioBuffer = null; // Decoded audio data
- let audioDuration = 0; // Duration in seconds
- let audioSource = null; // Current playback source
- let audioContext = null; // Audio context for playback
- let isPlaying = false;
- let playbackStartTime = 0; // When playback started (audioContext.currentTime)
- let playbackOffset = 0; // Offset into audio (seconds)
- let animationFrameId = null;
- let lastExpandedSeqIndex = -1;
+ // State
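+ // Note: pixelsPerSecond is effectively pixels per *beat*, since the timeline's primary unit is beats.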
+ const state = {
+ sequences: [], currentFile: null, selectedItem: null, pixelsPerSecond: 100,
+ showBeats: true, quantizeUnit: 1, bpm: 120, isDragging: false, dragOffset: { x: 0, y: 0 },
+ lastActiveSeqIndex: -1, isDraggingHandle: false, handleType: null,
+ audioBuffer: null, audioDuration: 0, audioSource: null, audioContext: null,
+ isPlaying: false, playbackStartTime: 0, playbackOffset: 0, animationFrameId: null,
+ lastExpandedSeqIndex: -1, dragMoved: false
+ };
- // DOM elements
- const timeline = document.getElementById('timeline');
- const timelineContainer = document.querySelector('.timeline-container');
- const timelineContent = document.getElementById('timelineContent');
- const fileInput = document.getElementById('fileInput');
- const saveBtn = document.getElementById('saveBtn');
- const audioInput = document.getElementById('audioInput');
- const clearAudioBtn = document.getElementById('clearAudioBtn');
- const waveformCanvas = document.getElementById('waveformCanvas');
- const addSequenceBtn = document.getElementById('addSequenceBtn');
- const deleteBtn = document.getElementById('deleteBtn');
- const reorderBtn = document.getElementById('reorderBtn');
- const propertiesPanel = document.getElementById('propertiesPanel');
- const propertiesContent = document.getElementById('propertiesContent');
- const messageArea = document.getElementById('messageArea');
- const zoomSlider = document.getElementById('zoomSlider');
- const zoomLevel = document.getElementById('zoomLevel');
- const stats = document.getElementById('stats');
- const playPauseBtn = document.getElementById('playPauseBtn');
- const playbackControls = document.getElementById('playbackControls');
- const playbackTime = document.getElementById('playbackTime');
- const playbackIndicator = document.getElementById('playbackIndicator');
+ // DOM
+ const dom = {
+ timeline: document.getElementById('timeline'),
+ timelineContent: document.getElementById('timelineContent'),
+ fileInput: document.getElementById('fileInput'),
+ saveBtn: document.getElementById('saveBtn'),
+ audioInput: document.getElementById('audioInput'),
+ clearAudioBtn: document.getElementById('clearAudioBtn'),
+ waveformCanvas: document.getElementById('waveformCanvas'),
+ waveformContainer: document.getElementById('waveformContainer'),
+ addSequenceBtn: document.getElementById('addSequenceBtn'),
+ deleteBtn: document.getElementById('deleteBtn'),
+ reorderBtn: document.getElementById('reorderBtn'),
+ propertiesPanel: document.getElementById('propertiesPanel'),
+ propertiesContent: document.getElementById('propertiesContent'),
+ messageArea: document.getElementById('messageArea'),
+ zoomSlider: document.getElementById('zoomSlider'),
+ zoomLevel: document.getElementById('zoomLevel'),
+ stats: document.getElementById('stats'),
+ playbackControls: document.getElementById('playbackControls'),
+ playPauseBtn: document.getElementById('playPauseBtn'),
+ playbackTime: document.getElementById('playbackTime'),
+ playbackIndicator: document.getElementById('playbackIndicator'),
+ waveformPlaybackIndicator: document.getElementById('waveformPlaybackIndicator'),
+ panelToggle: document.getElementById('panelToggle'),
+ panelCollapseBtn: document.getElementById('panelCollapseBtn'),
+ bpmSlider: document.getElementById('bpmSlider'),
+ currentBPM: document.getElementById('currentBPM'),
+ showBeatsCheckbox: document.getElementById('showBeatsCheckbox'),
+ quantizeSelect: document.getElementById('quantizeSelect')
+ };
- // Parser: timeline.seq → JavaScript objects
- // Format specification: doc/SEQUENCE.md
+ // Parser
function parseSeqFile(content) {
const sequences = [];
- const lines = content.split('\n');
- let currentSequence = null;
- let bpm = 120; // Default BPM
- let currentPriority = 0; // Track priority for + = - modifiers
+ let currentSequence = null, bpm = 120, currentPriority = 0;
- // Helper: Parse time notation (returns beats)
- function parseTime(timeStr) {
- if (timeStr.endsWith('s')) {
- // Explicit seconds: "2.5s" = convert to beats
- const seconds = parseFloat(timeStr.slice(0, -1));
- return seconds * bpm / 60.0;
- }
- if (timeStr.endsWith('b')) {
- // Explicit beats: "4b" = 4 beats
- return parseFloat(timeStr.slice(0, -1));
- }
- // Default: beats
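+ // Parse time notation into beats: "2.5s" = explicit seconds (converted via BPM), "4b" = explicit beats, bare numbers default to beats.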
+ const parseTime = (timeStr) => {
+ if (timeStr.endsWith('s')) return parseFloat(timeStr.slice(0, -1)) * bpm / 60.0;
+ if (timeStr.endsWith('b')) return parseFloat(timeStr.slice(0, -1));
return parseFloat(timeStr);
- }
+ };
- // Helper: Strip inline comments
- function stripComment(line) {
- const commentIdx = line.indexOf('#');
- if (commentIdx >= 0) {
- return line.slice(0, commentIdx).trim();
- }
- return line;
- }
+ const stripComment = (line) => {
+ const idx = line.indexOf('#');
+ return idx >= 0 ? line.slice(0, idx).trim() : line;
+ };
- for (let line of lines) {
+ for (let line of content.split('\n')) {
line = line.trim();
-
- // Skip empty lines
- if (!line) continue;
-
- // Parse BPM comment
- if (line.startsWith('# BPM ')) {
- const bpmMatch = line.match(/# BPM (\d+)/);
- if (bpmMatch) {
- bpm = parseInt(bpmMatch[1]);
+ if (!line || line.startsWith('#')) {
+ if (line.startsWith('# BPM ')) {
+ const m = line.match(/# BPM (\d+)/);
+ if (m) bpm = parseInt(m[1]);
}
continue;
}
-
- // Skip other comments
- if (line.startsWith('#')) continue;
-
- // Strip inline comments
line = stripComment(line);
if (!line) continue;
- // Parse SEQUENCE line: SEQUENCE <time> <priority> [name] [end]
const seqMatch = line.match(/^SEQUENCE\s+(\S+)\s+(\d+)(?:\s+"([^"]+)")?(?:\s+(\S+))?$/);
if (seqMatch) {
- currentSequence = {
- type: 'sequence',
- startTime: parseTime(seqMatch[1]),
- priority: parseInt(seqMatch[2]),
- effects: [],
- name: seqMatch[3] || '',
- _collapsed: true
- };
+ currentSequence = { type: 'sequence', startTime: parseTime(seqMatch[1]), priority: parseInt(seqMatch[2]), effects: [], name: seqMatch[3] || '', _collapsed: true };
sequences.push(currentSequence);
- currentPriority = -1; // Reset effect priority for new sequence
+ currentPriority = -1;
continue;
}
- // Parse EFFECT line: EFFECT <modifier> <ClassName> <start> <end> [args]
const effectMatch = line.match(/^EFFECT\s+([+=-])\s+(\w+)\s+(\S+)\s+(\S+)(?:\s+(.*))?$/);
if (effectMatch && currentSequence) {
const modifier = effectMatch[1];
-
- // Update priority based on modifier
- if (modifier === '+') {
- currentPriority++;
- } else if (modifier === '-') {
- currentPriority--;
- }
- // '=' keeps current priority
-
- const effect = {
- type: 'effect',
- className: effectMatch[2],
- startTime: parseTime(effectMatch[3]),
- endTime: parseTime(effectMatch[4]),
- priority: currentPriority,
- priorityModifier: modifier,
- args: effectMatch[5] || ''
- };
- currentSequence.effects.push(effect);
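+ // Priority modifiers: '+' increments, '-' decrements, '=' keeps the current priority.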
+ if (modifier === '+') currentPriority++;
+ else if (modifier === '-') currentPriority--;
+ currentSequence.effects.push({
+ type: 'effect', className: effectMatch[2],
+ startTime: parseTime(effectMatch[3]), endTime: parseTime(effectMatch[4]),
+ priority: currentPriority, priorityModifier: modifier, args: effectMatch[5] || ''
+ });
}
}
-
return { sequences, bpm };
}
- // Serializer: JavaScript objects → timeline.seq (outputs beats)
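+ // Serializer: JavaScript objects → timeline.seq (times written in beats)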
function serializeSeqFile(sequences) {
- let output = '# Demo Timeline\n';
- output += '# Generated by Timeline Editor\n';
- output += `# BPM ${bpm}\n\n`;
-
+ let output = `# Demo Timeline\n# Generated by Timeline Editor\n# BPM ${state.bpm}\n\n`;
for (const seq of sequences) {
const seqLine = `SEQUENCE ${seq.startTime.toFixed(2)} ${seq.priority}`;
output += seq.name ? `${seqLine} "${seq.name}"\n` : `${seqLine}\n`;
-
for (const effect of seq.effects) {
const modifier = effect.priorityModifier || '+';
output += ` EFFECT ${modifier} ${effect.className} ${effect.startTime.toFixed(2)} ${effect.endTime.toFixed(2)}`;
if (effect.args) {
- // Strip priority comments from args
const cleanArgs = effect.args.replace(/\s*#\s*Priority:\s*\d+/i, '').trim();
- if (cleanArgs) {
- output += ` ${cleanArgs}`;
- }
+ if (cleanArgs) output += ` ${cleanArgs}`;
}
output += '\n';
}
output += '\n';
}
-
return output;
}
- // Audio waveform visualization
+ // Audio
async function loadAudioFile(file) {
try {
const arrayBuffer = await file.arrayBuffer();
- if (!audioContext) {
- audioContext = new (window.AudioContext || window.webkitAudioContext)();
- }
- audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
- audioDuration = audioBuffer.duration;
-
+ if (!state.audioContext) state.audioContext = new (window.AudioContext || window.webkitAudioContext)();
+ state.audioBuffer = await state.audioContext.decodeAudioData(arrayBuffer);
+ state.audioDuration = state.audioBuffer.duration;
renderWaveform();
- waveformCanvas.style.display = 'block';
- playbackControls.style.display = 'flex';
- clearAudioBtn.disabled = false;
- showMessage(`Audio loaded: ${audioDuration.toFixed(2)}s`, 'success');
-
- // Extend timeline if audio is longer than current max time
+ dom.waveformContainer.style.display = 'block';
+ dom.playbackControls.style.display = 'flex';
+ dom.clearAudioBtn.disabled = false;
+ showMessage(`Audio loaded: ${state.audioDuration.toFixed(2)}s`, 'success');
renderTimeline();
} catch (err) {
showMessage(`Error loading audio: ${err.message}`, 'error');
@@ -753,995 +302,546 @@
}
function renderWaveform() {
- if (!audioBuffer) return;
-
- const canvas = waveformCanvas;
- const ctx = canvas.getContext('2d');
-
- // Set canvas size based on audio duration (convert to beats) and zoom
- const audioDurationBeats = audioDuration * bpm / 60.0;
- const canvasWidth = audioDurationBeats * pixelsPerSecond;
- const canvasHeight = 80;
-
- // Set actual canvas resolution (for sharp rendering)
- canvas.width = canvasWidth;
- canvas.height = canvasHeight;
-
- // Set CSS size to match
- canvas.style.width = `${canvasWidth}px`;
- canvas.style.height = `${canvasHeight}px`;
-
- // Clear canvas
- ctx.fillStyle = 'rgba(0, 0, 0, 0.3)';
- ctx.fillRect(0, 0, canvasWidth, canvasHeight);
-
- // Get audio data (use first channel for mono, or mix for stereo)
- const channelData = audioBuffer.getChannelData(0);
- const sampleRate = audioBuffer.sampleRate;
+ if (!state.audioBuffer) return;
+ const canvas = dom.waveformCanvas, ctx = canvas.getContext('2d');
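+ // Size the canvas in beat-pixels so the waveform stays aligned with the beat-based timeline.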
+ const audioDurationBeats = state.audioDuration * state.bpm / 60.0;
+ const canvasWidth = audioDurationBeats * state.pixelsPerSecond, canvasHeight = 80;
+ canvas.width = canvasWidth; canvas.height = canvasHeight;
+ canvas.style.width = `${canvasWidth}px`; canvas.style.height = `${canvasHeight}px`;
+ dom.waveformPlaybackIndicator.style.height = `${canvasHeight}px`;
+ ctx.fillStyle = 'rgba(0, 0, 0, 0.3)'; ctx.fillRect(0, 0, canvasWidth, canvasHeight);
+ const channelData = state.audioBuffer.getChannelData(0);
const samplesPerPixel = Math.ceil(channelData.length / canvasWidth);
-
- // Draw waveform
- ctx.strokeStyle = '#4ec9b0';
- ctx.lineWidth = 1;
- ctx.beginPath();
-
- const centerY = canvasHeight / 2;
- const amplitudeScale = canvasHeight * 0.4; // Use 80% of height
-
+ ctx.strokeStyle = '#4ec9b0'; ctx.lineWidth = 1; ctx.beginPath();
+ const centerY = canvasHeight / 2, amplitudeScale = canvasHeight * 0.4;
for (let x = 0; x < canvasWidth; x++) {
const startSample = Math.floor(x * samplesPerPixel);
const endSample = Math.min(startSample + samplesPerPixel, channelData.length);
-
- // Find min and max amplitude in this pixel range (for better visualization)
- let min = 1.0;
- let max = -1.0;
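+ // Track the min/max amplitude within each pixel's sample range for a filled waveform.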
+ let min = 1.0, max = -1.0;
for (let i = startSample; i < endSample; i++) {
const sample = channelData[i];
if (sample < min) min = sample;
if (sample > max) max = sample;
}
-
- // Draw vertical line from min to max
- const yMin = centerY - min * amplitudeScale;
- const yMax = centerY - max * amplitudeScale;
-
- if (x === 0) {
- ctx.moveTo(x, yMin);
- } else {
- ctx.lineTo(x, yMin);
- }
+ const yMin = centerY - min * amplitudeScale, yMax = centerY - max * amplitudeScale;
+ if (x === 0) ctx.moveTo(x, yMin); else ctx.lineTo(x, yMin);
ctx.lineTo(x, yMax);
}
-
- ctx.stroke();
-
- // Draw center line
- ctx.strokeStyle = 'rgba(255, 255, 255, 0.1)';
- ctx.lineWidth = 1;
- ctx.beginPath();
- ctx.moveTo(0, centerY);
- ctx.lineTo(canvasWidth, centerY);
ctx.stroke();
+ ctx.strokeStyle = 'rgba(255, 255, 255, 0.1)'; ctx.lineWidth = 1; ctx.beginPath();
+ ctx.moveTo(0, centerY); ctx.lineTo(canvasWidth, centerY); ctx.stroke();
}
function clearAudio() {
- stopPlayback();
- audioBuffer = null;
- audioDuration = 0;
- waveformCanvas.style.display = 'none';
- playbackControls.style.display = 'none';
- clearAudioBtn.disabled = true;
- renderTimeline();
- showMessage('Audio cleared', 'success');
+ stopPlayback(); state.audioBuffer = null; state.audioDuration = 0;
+ dom.waveformContainer.style.display = 'none'; dom.playbackControls.style.display = 'none';
+ dom.clearAudioBtn.disabled = true; renderTimeline(); showMessage('Audio cleared', 'success');
}
- // Playback functions
- function startPlayback() {
- if (!audioBuffer || !audioContext) return;
-
- // Resume audio context if suspended
- if (audioContext.state === 'suspended') {
- audioContext.resume();
+ async function startPlayback() {
+ if (!state.audioBuffer || !state.audioContext) return;
+ if (state.audioSource) { try { state.audioSource.stop(); } catch (e) { /* already stopped */ } state.audioSource = null; }
+ if (state.audioContext.state === 'suspended') await state.audioContext.resume();
+ try {
+ state.audioSource = state.audioContext.createBufferSource();
+ state.audioSource.buffer = state.audioBuffer;
+ state.audioSource.connect(state.audioContext.destination);
+ state.audioSource.start(0, state.playbackOffset);
+ state.playbackStartTime = state.audioContext.currentTime;
+ state.isPlaying = true; dom.playPauseBtn.textContent = '⏸ Pause';
+ updatePlaybackPosition();
+ state.audioSource.onended = () => { if (state.isPlaying) stopPlayback(); };
+ } catch (e) {
+ console.error('Failed to start playback:', e); showMessage('Playback failed: ' + e.message, 'error');
+ state.audioSource = null; state.isPlaying = false;
}
-
- // Create and start audio source
- audioSource = audioContext.createBufferSource();
- audioSource.buffer = audioBuffer;
- audioSource.connect(audioContext.destination);
- audioSource.start(0, playbackOffset);
-
- playbackStartTime = audioContext.currentTime;
- isPlaying = true;
- playPauseBtn.textContent = '⏸ Pause';
- playbackIndicator.classList.add('playing');
-
- // Start animation loop
- updatePlaybackPosition();
-
- audioSource.onended = () => {
- if (isPlaying) {
- stopPlayback();
- }
- };
}
- function stopPlayback() {
- if (audioSource) {
- try {
- audioSource.stop();
- } catch (e) {
- // Already stopped
- }
- audioSource = null;
- }
-
- if (animationFrameId) {
- cancelAnimationFrame(animationFrameId);
- animationFrameId = null;
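+ // savePosition=true records the elapsed offset so playback can resume from the pause point.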
+ function stopPlayback(savePosition = true) {
+ if (state.audioSource) { try { state.audioSource.stop(); } catch (e) { /* already stopped */ } state.audioSource = null; }
+ if (state.animationFrameId) { cancelAnimationFrame(state.animationFrameId); state.animationFrameId = null; }
+ if (state.isPlaying && savePosition) {
+ const elapsed = state.audioContext.currentTime - state.playbackStartTime;
+ state.playbackOffset = Math.min(state.playbackOffset + elapsed, state.audioDuration);
}
-
- if (isPlaying) {
- // Save current position for resume
- const elapsed = audioContext.currentTime - playbackStartTime;
- playbackOffset = Math.min(playbackOffset + elapsed, audioDuration);
- }
-
- isPlaying = false;
- playPauseBtn.textContent = '▶ Play';
- playbackIndicator.classList.remove('playing');
+ state.isPlaying = false; dom.playPauseBtn.textContent = '▶ Play';
}
function updatePlaybackPosition() {
- if (!isPlaying) return;
-
- const elapsed = audioContext.currentTime - playbackStartTime;
- const currentTime = playbackOffset + elapsed;
-
- // Update time display
- playbackTime.textContent = `${currentTime.toFixed(2)}s`;
-
- // Convert to beats for position calculation
- const currentBeats = currentTime * bpm / 60.0;
-
- // Update playback indicator position
- const indicatorX = currentBeats * pixelsPerSecond;
- playbackIndicator.style.left = `${indicatorX}px`;
-
- // Auto-scroll timeline to follow playback
- const viewportWidth = timelineContent.clientWidth;
- const scrollX = timelineContent.scrollLeft;
- const relativeX = indicatorX - scrollX;
-
- // Keep indicator in middle third of viewport
- if (relativeX < viewportWidth * 0.33 || relativeX > viewportWidth * 0.67) {
- timelineContent.scrollLeft = indicatorX - viewportWidth * 0.5;
+ if (!state.isPlaying) return;
+ const elapsed = state.audioContext.currentTime - state.playbackStartTime;
+ const currentTime = state.playbackOffset + elapsed, currentBeats = currentTime * state.bpm / 60.0;
+ dom.playbackTime.textContent = `${currentTime.toFixed(2)}s (${currentBeats.toFixed(2)}b)`;
+ const indicatorX = currentBeats * state.pixelsPerSecond;
+ dom.playbackIndicator.style.left = `${indicatorX}px`;
+ dom.waveformPlaybackIndicator.style.left = `${indicatorX}px`;
+ const viewportWidth = dom.timelineContent.clientWidth;
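+ // Smoothly ease the scroll so the playhead sits ~40% across the viewport instead of jumping.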
+ const targetScrollX = indicatorX - viewportWidth * 0.4;
+ const currentScrollX = dom.timelineContent.scrollLeft;
+ const scrollDiff = targetScrollX - currentScrollX;
+ if (Math.abs(scrollDiff) > 5) {
+ dom.timelineContent.scrollLeft += scrollDiff * 0.1;
}
-
- // Auto-expand/collapse sequences
expandSequenceAtTime(currentBeats);
-
- // Continue animation
- animationFrameId = requestAnimationFrame(updatePlaybackPosition);
+ state.animationFrameId = requestAnimationFrame(updatePlaybackPosition);
}
function expandSequenceAtTime(currentBeats) {
- // Find which sequence is active at current time
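+ // Find which sequence is active at the current beat position.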
let activeSeqIndex = -1;
- for (let i = 0; i < sequences.length; i++) {
- const seq = sequences[i];
- const seqEndBeats = seq.startTime + (seq.effects.length > 0
- ? Math.max(...seq.effects.map(e => e.endTime))
- : 0);
-
- if (currentBeats >= seq.startTime && currentBeats <= seqEndBeats) {
- activeSeqIndex = i;
- break;
- }
+ for (let i = 0; i < state.sequences.length; i++) {
+ const seq = state.sequences[i];
+ const seqEndBeats = seq.startTime + (seq.effects.length > 0 ? Math.max(...seq.effects.map(e => e.endTime)) : 0);
+ if (currentBeats >= seq.startTime && currentBeats <= seqEndBeats) { activeSeqIndex = i; break; }
}
-
- // Changed sequence - collapse old, expand new
- if (activeSeqIndex !== lastExpandedSeqIndex) {
- // Collapse previous sequence
- if (lastExpandedSeqIndex >= 0 && lastExpandedSeqIndex < sequences.length) {
- sequences[lastExpandedSeqIndex]._collapsed = true;
+ if (activeSeqIndex !== state.lastExpandedSeqIndex) {
+ const seqDivs = dom.timeline.querySelectorAll('.sequence');
+ if (state.lastExpandedSeqIndex >= 0 && seqDivs[state.lastExpandedSeqIndex]) {
+ seqDivs[state.lastExpandedSeqIndex].classList.remove('active-playing');
}
-
- // Expand new sequence
- if (activeSeqIndex >= 0) {
- sequences[activeSeqIndex]._collapsed = false;
- lastExpandedSeqIndex = activeSeqIndex;
-
- // Flash animation
- const seqDivs = timeline.querySelectorAll('.sequence');
- if (seqDivs[activeSeqIndex]) {
- seqDivs[activeSeqIndex].classList.add('active-flash');
- setTimeout(() => {
- seqDivs[activeSeqIndex]?.classList.remove('active-flash');
- }, 600);
- }
+ if (activeSeqIndex >= 0 && seqDivs[activeSeqIndex]) {
+ seqDivs[activeSeqIndex].classList.add('active-playing');
}
-
- // Re-render to show collapse/expand changes
- renderTimeline();
+ state.lastExpandedSeqIndex = activeSeqIndex;
}
}
- // Render timeline
+ // Render
function renderTimeline() {
- timeline.innerHTML = '';
- const timeMarkers = document.getElementById('timeMarkers');
- timeMarkers.innerHTML = '';
-
- // Calculate max time (in beats)
- let maxTime = 60; // Default 60 beats (15 bars)
- for (const seq of sequences) {
- const seqEnd = seq.startTime + 16; // Default 4 bars
- maxTime = Math.max(maxTime, seqEnd);
-
- for (const effect of seq.effects) {
- maxTime = Math.max(maxTime, seq.startTime + effect.endTime);
- }
- }
-
- // Extend timeline to fit audio if loaded
- if (audioDuration > 0) {
- const audioBeats = audioDuration * bpm / 60.0;
- maxTime = Math.max(maxTime, audioBeats);
+ dom.timeline.innerHTML = ''; document.getElementById('timeMarkers').innerHTML = '';
+ let maxTime = 60;
+ for (const seq of state.sequences) {
+ maxTime = Math.max(maxTime, seq.startTime + 16);
+ for (const effect of seq.effects) maxTime = Math.max(maxTime, seq.startTime + effect.endTime);
}
-
- // Render time markers
- const timelineWidth = maxTime * pixelsPerSecond;
- timeline.style.width = `${timelineWidth}px`;
-
- if (showBeats) {
- // Show beats (default)
+ if (state.audioDuration > 0) maxTime = Math.max(maxTime, state.audioDuration * state.bpm / 60.0);
+ const timelineWidth = maxTime * state.pixelsPerSecond;
+ dom.timeline.style.width = `${timelineWidth}px`;
+ let totalTimelineHeight = 0;
+ const timeMarkers = document.getElementById('timeMarkers');
+ if (state.showBeats) {
for (let beat = 0; beat <= maxTime; beat += 4) {
const marker = document.createElement('div');
- marker.className = 'time-marker';
- marker.style.left = `${beat * pixelsPerSecond}px`;
- marker.textContent = `${beat}b`;
- timeMarkers.appendChild(marker);
+ marker.className = 'time-marker'; marker.style.left = `${beat * state.pixelsPerSecond}px`;
+ marker.textContent = `${beat}b`; timeMarkers.appendChild(marker);
}
} else {
- // Show seconds
- const maxSeconds = maxTime * 60.0 / bpm;
+ const maxSeconds = maxTime * 60.0 / state.bpm;
for (let t = 0; t <= maxSeconds; t += 1) {
- const beatPos = t * bpm / 60.0;
- const marker = document.createElement('div');
- marker.className = 'time-marker';
- marker.style.left = `${beatPos * pixelsPerSecond}px`;
- marker.textContent = `${t}s`;
- timeMarkers.appendChild(marker);
+ const beatPos = t * state.bpm / 60.0, marker = document.createElement('div');
+ marker.className = 'time-marker'; marker.style.left = `${beatPos * state.pixelsPerSecond}px`;
+ marker.textContent = `${t}s`; timeMarkers.appendChild(marker);
}
}
-
- // Render sequences (with dynamic Y positioning to prevent overlap)
- let cumulativeY = 0;
- const sequenceGap = 10; // Gap between sequences
-
- sequences.forEach((seq, seqIndex) => {
+ let cumulativeY = 0, sequenceGap = 10;
+ state.sequences.forEach((seq, seqIndex) => {
const seqDiv = document.createElement('div');
- seqDiv.className = 'sequence';
- seqDiv.dataset.index = seqIndex;
-
- // Calculate sequence bounds based on effects (dynamic start/end)
- let seqVisualStart = seq.startTime;
- let seqVisualEnd = seq.startTime + 10; // Default 10s duration
-
+ seqDiv.className = 'sequence'; seqDiv.dataset.index = seqIndex;
+ let seqVisualStart = seq.startTime, seqVisualEnd = seq.startTime + 10;
if (seq.effects.length > 0) {
- const minEffectStart = Math.min(...seq.effects.map(e => e.startTime));
- const maxEffectEnd = Math.max(...seq.effects.map(e => e.endTime));
- seqVisualStart = seq.startTime + minEffectStart;
- seqVisualEnd = seq.startTime + maxEffectEnd;
+ seqVisualStart = seq.startTime + Math.min(...seq.effects.map(e => e.startTime));
+ seqVisualEnd = seq.startTime + Math.max(...seq.effects.map(e => e.endTime));
}
-
- const seqVisualWidth = seqVisualEnd - seqVisualStart;
-
- // Initialize collapsed state if undefined
- if (seq._collapsed === undefined) {
- seq._collapsed = false;
- }
-
- // Calculate sequence height based on number of effects (stacked vertically)
- const numEffects = seq.effects.length;
- const effectSpacing = 30;
+ if (seq._collapsed === undefined) seq._collapsed = false;
+ const numEffects = seq.effects.length, effectSpacing = 30;
const fullHeight = Math.max(70, 20 + numEffects * effectSpacing + 5);
const seqHeight = seq._collapsed ? 35 : fullHeight;
-
- seqDiv.style.left = `${seqVisualStart * pixelsPerSecond}px`;
+ seqDiv.style.left = `${seqVisualStart * state.pixelsPerSecond}px`;
seqDiv.style.top = `${cumulativeY}px`;
- seqDiv.style.width = `${seqVisualWidth * pixelsPerSecond}px`;
- seqDiv.style.height = `${seqHeight}px`;
- seqDiv.style.minHeight = `${seqHeight}px`;
- seqDiv.style.maxHeight = `${seqHeight}px`;
-
- // Store Y position for this sequence (used by effects and scroll)
- seq._yPosition = cumulativeY;
- cumulativeY += seqHeight + sequenceGap;
-
- // Create sequence header (double-click to collapse)
- const seqHeaderDiv = document.createElement('div');
- seqHeaderDiv.className = 'sequence-header';
-
- const headerName = document.createElement('span');
- headerName.className = 'sequence-header-name';
+ seqDiv.style.width = `${(seqVisualEnd - seqVisualStart) * state.pixelsPerSecond}px`;
+ seqDiv.style.height = `${seqHeight}px`; seqDiv.style.minHeight = `${seqHeight}px`; seqDiv.style.maxHeight = `${seqHeight}px`;
+ seq._yPosition = cumulativeY; cumulativeY += seqHeight + sequenceGap; totalTimelineHeight = cumulativeY;
+ const seqHeaderDiv = document.createElement('div'); seqHeaderDiv.className = 'sequence-header';
+ const headerName = document.createElement('span'); headerName.className = 'sequence-header-name';
headerName.textContent = seq.name || `Sequence ${seqIndex + 1}`;
-
seqHeaderDiv.appendChild(headerName);
-
- // Prevent drag on header
- seqHeaderDiv.addEventListener('mousedown', (e) => {
- e.stopPropagation();
- });
-
- // Double-click to toggle collapse
- seqHeaderDiv.addEventListener('dblclick', (e) => {
- e.stopPropagation();
- e.preventDefault();
- seq._collapsed = !seq._collapsed;
- renderTimeline();
- });
-
+ seqHeaderDiv.addEventListener('dblclick', e => { e.stopPropagation(); e.preventDefault(); seq._collapsed = !seq._collapsed; renderTimeline(); });
seqDiv.appendChild(seqHeaderDiv);
-
- // Create sequence name overlay (large, centered, fades on hover)
- const seqNameDiv = document.createElement('div');
- seqNameDiv.className = 'sequence-name';
- seqNameDiv.textContent = seq.name || `Sequence ${seqIndex + 1}`;
-
- seqDiv.appendChild(seqNameDiv);
-
- // Apply collapsed state
- if (seq._collapsed) {
- seqDiv.classList.add('collapsed');
- }
-
- if (selectedItem && selectedItem.type === 'sequence' && selectedItem.index === seqIndex) {
- seqDiv.classList.add('selected');
- }
-
- // Fade name on hover
- seqDiv.addEventListener('mouseenter', () => {
- seqDiv.classList.add('hovered');
- });
- seqDiv.addEventListener('mouseleave', () => {
- seqDiv.classList.remove('hovered');
- });
-
- seqDiv.addEventListener('mousedown', (e) => startDrag(e, 'sequence', seqIndex));
- seqDiv.addEventListener('click', (e) => {
- e.stopPropagation();
- selectItem('sequence', seqIndex);
- });
-
- timeline.appendChild(seqDiv);
-
- // Render effects within sequence (skip if collapsed)
+ const seqNameDiv = document.createElement('div'); seqNameDiv.className = 'sequence-name';
+ seqNameDiv.textContent = seq.name || `Sequence ${seqIndex + 1}`; seqDiv.appendChild(seqNameDiv);
+ if (seq._collapsed) seqDiv.classList.add('collapsed');
+ if (state.selectedItem && state.selectedItem.type === 'sequence' && state.selectedItem.index === seqIndex) seqDiv.classList.add('selected');
+ seqDiv.addEventListener('mouseenter', () => seqDiv.classList.add('hovered'));
+ seqDiv.addEventListener('mouseleave', () => seqDiv.classList.remove('hovered'));
+ seqDiv.addEventListener('mousedown', e => startDrag(e, 'sequence', seqIndex));
+ seqDiv.addEventListener('click', e => { e.stopPropagation(); selectItem('sequence', seqIndex); });
+ seqDiv.addEventListener('dblclick', e => { e.stopPropagation(); e.preventDefault(); seq._collapsed = !seq._collapsed; renderTimeline(); });
+ dom.timeline.appendChild(seqDiv);
if (!seq._collapsed) {
- seq.effects.forEach((effect, effectIndex) => {
- const effectDiv = document.createElement('div');
- effectDiv.className = 'effect';
- effectDiv.dataset.seqIndex = seqIndex;
- effectDiv.dataset.effectIndex = effectIndex;
-
- const effectStart = (seq.startTime + effect.startTime) * pixelsPerSecond;
- const effectWidth = (effect.endTime - effect.startTime) * pixelsPerSecond;
-
- effectDiv.style.left = `${effectStart}px`;
- effectDiv.style.top = `${seq._yPosition + 20 + effectIndex * 30}px`;
- effectDiv.style.width = `${effectWidth}px`;
- effectDiv.style.height = '26px';
-
- // Format time display (beats primary, seconds in tooltip)
- const startBeat = effect.startTime.toFixed(1);
- const endBeat = effect.endTime.toFixed(1);
- const startSec = (effect.startTime * 60.0 / bpm).toFixed(1);
- const endSec = (effect.endTime * 60.0 / bpm).toFixed(1);
- const timeDisplay = showBeats
- ? `${startBeat}-${endBeat}b (${startSec}-${endSec}s)`
- : `${startSec}-${endSec}s (${startBeat}-${endBeat}b)`;
-
- // Show only class name, full info on hover
- effectDiv.innerHTML = `
- <div class="effect-handle left"></div>
- <small>${effect.className}</small>
- <div class="effect-handle right"></div>
- `;
- effectDiv.title = `${effect.className}\n${timeDisplay}\nPriority: ${effect.priority}\n${effect.args || '(no args)'}`;
-
- if (selectedItem && selectedItem.type === 'effect' &&
- selectedItem.seqIndex === seqIndex && selectedItem.effectIndex === effectIndex) {
- effectDiv.classList.add('selected');
- }
-
- // Handle resizing (only for selected effects)
- const leftHandle = effectDiv.querySelector('.effect-handle.left');
- const rightHandle = effectDiv.querySelector('.effect-handle.right');
-
- leftHandle.addEventListener('mousedown', (e) => {
- e.stopPropagation();
- startHandleDrag(e, 'left', seqIndex, effectIndex);
- });
-
- rightHandle.addEventListener('mousedown', (e) => {
- e.stopPropagation();
- startHandleDrag(e, 'right', seqIndex, effectIndex);
+ seq.effects.forEach((effect, effectIndex) => {
+ const effectDiv = document.createElement('div'); effectDiv.className = 'effect';
+ effectDiv.dataset.seqIndex = seqIndex; effectDiv.dataset.effectIndex = effectIndex;
+ const effectStart = (seq.startTime + effect.startTime) * state.pixelsPerSecond;
+ const effectWidth = (effect.endTime - effect.startTime) * state.pixelsPerSecond;
+ effectDiv.style.left = `${effectStart}px`; effectDiv.style.top = `${seq._yPosition + 20 + effectIndex * 30}px`;
+ effectDiv.style.width = `${effectWidth}px`; effectDiv.style.height = '26px';
+ const startBeat = effect.startTime.toFixed(1), endBeat = effect.endTime.toFixed(1);
+ const startSec = (effect.startTime * 60.0 / state.bpm).toFixed(1), endSec = (effect.endTime * 60.0 / state.bpm).toFixed(1);
+ const timeDisplay = state.showBeats ? `${startBeat}-${endBeat}b (${startSec}-${endSec}s)` : `${startSec}-${endSec}s (${startBeat}-${endBeat}b)`;
+ effectDiv.innerHTML = `<div class="effect-handle left"></div><small>${effect.className}</small><div class="effect-handle right"></div>`;
+ effectDiv.title = `${effect.className}\n${timeDisplay}\nPriority: ${effect.priority}\n${effect.args || '(no args)'}`;
+ if (state.selectedItem && state.selectedItem.type === 'effect' && state.selectedItem.seqIndex === seqIndex && state.selectedItem.effectIndex === effectIndex) effectDiv.classList.add('selected');
+ const leftHandle = effectDiv.querySelector('.effect-handle.left');
+ const rightHandle = effectDiv.querySelector('.effect-handle.right');
+ leftHandle.addEventListener('mousedown', e => { e.stopPropagation(); startHandleDrag(e, 'left', seqIndex, effectIndex); });
+ rightHandle.addEventListener('mousedown', e => { e.stopPropagation(); startHandleDrag(e, 'right', seqIndex, effectIndex); });
+ effectDiv.addEventListener('mousedown', e => { if (!e.target.classList.contains('effect-handle')) { e.stopPropagation(); startDrag(e, 'effect', seqIndex, effectIndex); } });
+ effectDiv.addEventListener('click', e => { e.stopPropagation(); selectItem('effect', seqIndex, effectIndex); });
+ dom.timeline.appendChild(effectDiv);
});
-
- effectDiv.addEventListener('mousedown', (e) => {
- // Only drag if not clicking on a handle
- if (!e.target.classList.contains('effect-handle')) {
- e.stopPropagation();
- startDrag(e, 'effect', seqIndex, effectIndex);
- }
- });
- effectDiv.addEventListener('click', (e) => {
- e.stopPropagation();
- selectItem('effect', seqIndex, effectIndex);
- });
-
- timeline.appendChild(effectDiv);
- });
}
});
-
+ const timelineHeight = Math.max(totalTimelineHeight, dom.timelineContent.offsetHeight);
+ dom.timeline.style.minHeight = `${timelineHeight}px`;
+ if (dom.playbackIndicator) dom.playbackIndicator.style.height = `${timelineHeight}px`;
updateStats();
}
- // Drag handling
+ // Drag
function startDrag(e, type, seqIndex, effectIndex = null) {
- e.preventDefault();
- isDragging = true;
-
- // Calculate offset from timeline origin (not from element edge)
- // CRITICAL: Use currentTarget (element with listener) not target (what was clicked)
- const timelineRect = timeline.getBoundingClientRect();
+ state.isDragging = true;
+ state.dragMoved = false;
+ const timelineRect = dom.timeline.getBoundingClientRect();
const currentLeft = parseFloat(e.currentTarget.style.left) || 0;
- dragOffset.x = e.clientX - timelineRect.left - currentLeft;
- dragOffset.y = e.clientY - e.currentTarget.getBoundingClientRect().top;
-
- selectedItem = { type, index: seqIndex, seqIndex, effectIndex };
- renderTimeline();
- updateProperties();
-
- document.addEventListener('mousemove', onDrag);
- document.addEventListener('mouseup', stopDrag);
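+ // Offset from the timeline origin via currentTarget (the element with the listener), compensating for horizontal scroll.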
+ state.dragOffset.x = e.clientX - timelineRect.left + dom.timelineContent.scrollLeft - currentLeft;
+ state.dragOffset.y = e.clientY - e.currentTarget.getBoundingClientRect().top;
+ state.selectedItem = { type, index: seqIndex, seqIndex, effectIndex };
+ document.addEventListener('mousemove', onDrag); document.addEventListener('mouseup', stopDrag);
}
function onDrag(e) {
- if (!isDragging || !selectedItem) return;
-
- const timelineRect = timeline.getBoundingClientRect();
- const newX = e.clientX - timelineRect.left - dragOffset.x;
- let newTime = Math.max(0, newX / pixelsPerSecond);
-
- // Snap to beat when enabled
- if (showBeats) {
- newTime = Math.round(newTime);
+ if (!state.isDragging || !state.selectedItem) return;
+ state.dragMoved = true;
+ const timelineRect = dom.timeline.getBoundingClientRect();
+ let newTime = Math.max(0, (e.clientX - timelineRect.left + dom.timelineContent.scrollLeft - state.dragOffset.x) / state.pixelsPerSecond);
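+ // Snap to the quantize grid: quantizeUnit is subdivisions per beat (e.g. 4 = 1/4-beat steps, 0 = off).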
+ if (state.quantizeUnit > 0) newTime = Math.round(newTime * state.quantizeUnit) / state.quantizeUnit;
+ if (state.selectedItem.type === 'sequence') state.sequences[state.selectedItem.index].startTime = newTime;
+ else if (state.selectedItem.type === 'effect') {
+ const seq = state.sequences[state.selectedItem.seqIndex], effect = seq.effects[state.selectedItem.effectIndex];
+ const duration = effect.endTime - effect.startTime, relativeTime = newTime - seq.startTime;
+ effect.startTime = relativeTime; effect.endTime = effect.startTime + duration;
}
-
- if (selectedItem.type === 'sequence') {
- sequences[selectedItem.index].startTime = Math.round(newTime * 100) / 100;
- } else if (selectedItem.type === 'effect') {
- // Effects have times relative to their parent sequence
- const seq = sequences[selectedItem.seqIndex];
- const effect = seq.effects[selectedItem.effectIndex];
- const duration = effect.endTime - effect.startTime;
-
- // Convert absolute timeline position to relative time within sequence
- const relativeTime = newTime - seq.startTime;
- effect.startTime = Math.round(relativeTime * 100) / 100;
- effect.endTime = effect.startTime + duration;
- }
-
- renderTimeline();
- updateProperties();
+ renderTimeline(); updateProperties();
}
function stopDrag() {
- isDragging = false;
- document.removeEventListener('mousemove', onDrag);
- document.removeEventListener('mouseup', stopDrag);
+ state.isDragging = false;
+ document.removeEventListener('mousemove', onDrag); document.removeEventListener('mouseup', stopDrag);
+ if (state.dragMoved) {
+ renderTimeline(); updateProperties();
+ }
}
- // Handle dragging (for resizing effects)
function startHandleDrag(e, type, seqIndex, effectIndex) {
- e.preventDefault();
- isDraggingHandle = true;
- handleType = type;
- selectedItem = { type: 'effect', seqIndex, effectIndex, index: seqIndex };
- renderTimeline();
- updateProperties();
-
- document.addEventListener('mousemove', onHandleDrag);
- document.addEventListener('mouseup', stopHandleDrag);
+ e.preventDefault(); state.isDraggingHandle = true; state.handleType = type;
+ state.selectedItem = { type: 'effect', seqIndex, effectIndex, index: seqIndex };
+ document.addEventListener('mousemove', onHandleDrag); document.addEventListener('mouseup', stopHandleDrag);
}
function onHandleDrag(e) {
- if (!isDraggingHandle || !selectedItem) return;
-
- const timelineRect = timeline.getBoundingClientRect();
- const newX = e.clientX - timelineRect.left;
- let newTime = Math.max(0, newX / pixelsPerSecond);
-
- // Snap to beat when enabled
- if (showBeats) {
- newTime = Math.round(newTime);
- }
-
- const seq = sequences[selectedItem.seqIndex];
- const effect = seq.effects[selectedItem.effectIndex];
-
- // Convert to relative time
+ if (!state.isDraggingHandle || !state.selectedItem) return;
+ const timelineRect = dom.timeline.getBoundingClientRect();
+ let newTime = Math.max(0, (e.clientX - timelineRect.left + dom.timelineContent.scrollLeft) / state.pixelsPerSecond);
+ if (state.quantizeUnit > 0) newTime = Math.round(newTime * state.quantizeUnit) / state.quantizeUnit;
+ const seq = state.sequences[state.selectedItem.seqIndex], effect = seq.effects[state.selectedItem.effectIndex];
const relativeTime = newTime - seq.startTime;
-
- if (handleType === 'left') {
- // Adjust start time, keep end time fixed
- // Allow negative times (effect can extend before sequence start)
- const newStartTime = Math.round(relativeTime * 100) / 100;
- effect.startTime = Math.min(newStartTime, effect.endTime - 0.1);
- } else if (handleType === 'right') {
- // Adjust end time, keep start time fixed
- effect.endTime = Math.max(effect.startTime + 0.1, Math.round(relativeTime * 100) / 100);
- }
-
- renderTimeline();
- updateProperties();
+ if (state.handleType === 'left') effect.startTime = Math.min(relativeTime, effect.endTime - 0.1);
+ else if (state.handleType === 'right') effect.endTime = Math.max(effect.startTime + 0.1, relativeTime);
+ renderTimeline(); updateProperties();
}
function stopHandleDrag() {
- isDraggingHandle = false;
- handleType = null;
- document.removeEventListener('mousemove', onHandleDrag);
- document.removeEventListener('mouseup', stopHandleDrag);
+ state.isDraggingHandle = false; state.handleType = null;
+ document.removeEventListener('mousemove', onHandleDrag); document.removeEventListener('mouseup', stopHandleDrag);
+ renderTimeline(); updateProperties();
}
- // Selection
function selectItem(type, seqIndex, effectIndex = null) {
- selectedItem = { type, index: seqIndex, seqIndex, effectIndex };
- renderTimeline();
- updateProperties();
- deleteBtn.disabled = false;
+ state.selectedItem = { type, index: seqIndex, seqIndex, effectIndex };
+ renderTimeline(); updateProperties(); dom.deleteBtn.disabled = false;
}
- // Properties panel
+ // Properties
function updateProperties() {
- if (!selectedItem) {
- propertiesPanel.style.display = 'none';
- return;
- }
-
- propertiesPanel.style.display = 'block';
-
- if (selectedItem.type === 'sequence') {
- const seq = sequences[selectedItem.index];
- propertiesContent.innerHTML = `
- <div class="property-group">
- <label>Name</label>
- <input type="text" id="propName" value="${seq.name || ''}" placeholder="Sequence name" oninput="autoApplyProperties()">
- </div>
- <div class="property-group">
- <label>Start Time (seconds)</label>
- <input type="number" id="propStartTime" value="${seq.startTime}" step="0.1" min="0" oninput="autoApplyProperties()">
- </div>
+ if (!state.selectedItem) { dom.propertiesPanel.style.display = 'none'; return; }
+ dom.propertiesPanel.style.display = 'block';
+ if (state.selectedItem.type === 'sequence') {
+ const seq = state.sequences[state.selectedItem.index];
+ dom.propertiesContent.innerHTML = `
+ <div class="property-group"><label>Name</label><input type="text" id="propName" value="${seq.name || ''}" placeholder="Sequence name"></div>
+ <div class="property-group"><label>Start Time (seconds)</label><input type="number" id="propStartTime" value="${seq.startTime}" step="0.1" min="0"></div>
`;
- } else if (selectedItem.type === 'effect') {
- const effect = sequences[selectedItem.seqIndex].effects[selectedItem.effectIndex];
- const effects = sequences[selectedItem.seqIndex].effects;
- const canMoveUp = selectedItem.effectIndex < effects.length - 1;
- const canMoveDown = selectedItem.effectIndex > 0;
+ document.getElementById('propName').addEventListener('input', applyProperties);
+ document.getElementById('propStartTime').addEventListener('input', applyProperties);
+ } else if (state.selectedItem.type === 'effect') {
+ const effect = state.sequences[state.selectedItem.seqIndex].effects[state.selectedItem.effectIndex];
+ const effects = state.sequences[state.selectedItem.seqIndex].effects;
+ const canMoveUp = state.selectedItem.effectIndex < effects.length - 1, canMoveDown = state.selectedItem.effectIndex > 0;
const samePriority = effect.priorityModifier === '=';
-
- propertiesContent.innerHTML = `
- <div class="property-group">
- <label>Effect Class</label>
- <input type="text" id="propClassName" value="${effect.className}" oninput="autoApplyProperties()">
- </div>
- <div class="property-group">
- <label>Start Time (relative to sequence)</label>
- <input type="number" id="propStartTime" value="${effect.startTime}" step="0.1" oninput="autoApplyProperties()">
- </div>
- <div class="property-group">
- <label>End Time (relative to sequence)</label>
- <input type="number" id="propEndTime" value="${effect.endTime}" step="0.1" oninput="autoApplyProperties()">
- </div>
- <div class="property-group">
- <label>Constructor Arguments</label>
- <input type="text" id="propArgs" value="${effect.args || ''}" oninput="autoApplyProperties()">
- </div>
- <div class="property-group">
- <label>Stack Position (determines priority)</label>
+ dom.propertiesContent.innerHTML = `
+ <div class="property-group"><label>Effect Class</label><input type="text" id="propClassName" value="${effect.className}"></div>
+ <div class="property-group"><label>Start Time (relative to sequence)</label><input type="number" id="propStartTime" value="${effect.startTime}" step="0.1"></div>
+ <div class="property-group"><label>End Time (relative to sequence)</label><input type="number" id="propEndTime" value="${effect.endTime}" step="0.1"></div>
+ <div class="property-group"><label>Constructor Arguments</label><input type="text" id="propArgs" value="${effect.args || ''}"></div>
+ <div class="property-group"><label>Stack Position (determines priority)</label>
<div style="display: flex; gap: 5px; margin-bottom: 10px;">
- <button onclick="moveEffectUp()" ${!canMoveUp ? 'disabled' : ''} style="flex: 1;">↑ Up</button>
- <button onclick="moveEffectDown()" ${!canMoveDown ? 'disabled' : ''} style="flex: 1;">↓ Down</button>
+ <button id="moveUpBtn" ${!canMoveUp ? 'disabled' : ''} style="flex: 1;">↑ Up</button>
+ <button id="moveDownBtn" ${!canMoveDown ? 'disabled' : ''} style="flex: 1;">↓ Down</button>
</div>
- <button onclick="toggleSamePriority()" style="width: 100%;">
- ${samePriority ? '✓ Same as Above (=)' : 'Increment (+)'}
- </button>
+ <button id="togglePriorityBtn" style="width: 100%;">${samePriority ? '✓ Same as Above (=)' : 'Increment (+)'}</button>
</div>
`;
+ document.getElementById('propClassName').addEventListener('input', applyProperties);
+ document.getElementById('propStartTime').addEventListener('input', applyProperties);
+ document.getElementById('propEndTime').addEventListener('input', applyProperties);
+ document.getElementById('propArgs').addEventListener('input', applyProperties);
+ document.getElementById('moveUpBtn').addEventListener('click', moveEffectUp);
+ document.getElementById('moveDownBtn').addEventListener('click', moveEffectDown);
+ document.getElementById('togglePriorityBtn').addEventListener('click', toggleSamePriority);
}
}
- // Auto-apply properties on input change (no Apply button needed)
- function autoApplyProperties() {
- if (!selectedItem) return;
-
- if (selectedItem.type === 'sequence') {
- const seq = sequences[selectedItem.index];
+ function applyProperties() {
+ if (!state.selectedItem) return;
+ if (state.selectedItem.type === 'sequence') {
+ const seq = state.sequences[state.selectedItem.index];
seq.name = document.getElementById('propName').value;
seq.startTime = parseFloat(document.getElementById('propStartTime').value);
- } else if (selectedItem.type === 'effect') {
- const effect = sequences[selectedItem.seqIndex].effects[selectedItem.effectIndex];
+ } else if (state.selectedItem.type === 'effect') {
+ const effect = state.sequences[state.selectedItem.seqIndex].effects[state.selectedItem.effectIndex];
effect.className = document.getElementById('propClassName').value;
effect.startTime = parseFloat(document.getElementById('propStartTime').value);
effect.endTime = parseFloat(document.getElementById('propEndTime').value);
effect.args = document.getElementById('propArgs').value;
}
-
- // Re-render timeline (recalculates sequence bounds)
renderTimeline();
}
- // Move effect up in stack (higher priority)
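+ // Move effect up in the stack (higher priority).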
function moveEffectUp() {
- if (!selectedItem || selectedItem.type !== 'effect') return;
-
- const effects = sequences[selectedItem.seqIndex].effects;
- const index = selectedItem.effectIndex;
-
+ if (!state.selectedItem || state.selectedItem.type !== 'effect') return;
+ const effects = state.sequences[state.selectedItem.seqIndex].effects, index = state.selectedItem.effectIndex;
if (index < effects.length - 1) {
- // Swap with effect above
[effects[index], effects[index + 1]] = [effects[index + 1], effects[index]];
- selectedItem.effectIndex = index + 1;
- renderTimeline();
- updateProperties();
+ state.selectedItem.effectIndex = index + 1; renderTimeline(); updateProperties();
}
}
- // Move effect down in stack (lower priority)
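+ // Move effect down in the stack (lower priority).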
function moveEffectDown() {
- if (!selectedItem || selectedItem.type !== 'effect') return;
-
- const effects = sequences[selectedItem.seqIndex].effects;
- const index = selectedItem.effectIndex;
-
+ if (!state.selectedItem || state.selectedItem.type !== 'effect') return;
+ const effects = state.sequences[state.selectedItem.seqIndex].effects, index = state.selectedItem.effectIndex;
if (index > 0) {
- // Swap with effect below
[effects[index], effects[index - 1]] = [effects[index - 1], effects[index]];
- selectedItem.effectIndex = index - 1;
- renderTimeline();
- updateProperties();
+ state.selectedItem.effectIndex = index - 1; renderTimeline(); updateProperties();
}
}
- // Toggle same priority as previous effect (= modifier)
function toggleSamePriority() {
- if (!selectedItem || selectedItem.type !== 'effect') return;
-
- const effect = sequences[selectedItem.seqIndex].effects[selectedItem.effectIndex];
+ if (!state.selectedItem || state.selectedItem.type !== 'effect') return;
+ const effect = state.sequences[state.selectedItem.seqIndex].effects[state.selectedItem.effectIndex];
effect.priorityModifier = effect.priorityModifier === '=' ? '+' : '=';
updateProperties();
}
- // File operations
- fileInput.addEventListener('change', (e) => {
+ // Utilities
+ function showMessage(text, type) {
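+        // 'type' doubles as the CSS class ('success' or 'error'); messages auto-clear after 3 seconds.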
+ if (type === 'error') console.error(text);
+ dom.messageArea.innerHTML = `<div class="${type}">${text}</div>`;
+ setTimeout(() => dom.messageArea.innerHTML = '', 3000);
+ }
+
+ function updateStats() {
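+        // Effect times are stored relative to their sequence, so total duration = max over effects of seq.startTime + effect.endTime.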
+ const effectCount = state.sequences.reduce((sum, seq) => sum + seq.effects.length, 0);
+ const maxTime = state.sequences.reduce((max, seq) => {
+ const seqMax = seq.effects.reduce((m, e) => Math.max(m, seq.startTime + e.endTime), seq.startTime);
+ return Math.max(max, seqMax);
+ }, 0);
+ dom.stats.innerHTML = `📊 Sequences: ${state.sequences.length} | 🎬 Effects: ${effectCount} | ⏱️ Duration: ${maxTime.toFixed(2)}s`;
+ }
+
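+    // Optional deep-linking via query parameters, e.g. index.html?seq=workspaces/main/timeline.seq&wav=music.wav
+    // (the example paths are illustrative; any fetchable URL works for ?seq= and ?wav=)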
+ async function loadFromURLParams() {
+ const params = new URLSearchParams(window.location.search);
+ const seqURL = params.get('seq'), wavURL = params.get('wav');
+ if (seqURL) {
+ try {
+ const response = await fetch(seqURL);
+ if (!response.ok) throw new Error(`HTTP ${response.status}`);
+ const content = await response.text(), parsed = parseSeqFile(content);
+ state.sequences = parsed.sequences; state.bpm = parsed.bpm;
+ dom.currentBPM.textContent = state.bpm; dom.bpmSlider.value = state.bpm;
+ state.currentFile = seqURL.split('/').pop();
+ renderTimeline(); dom.saveBtn.disabled = false; dom.addSequenceBtn.disabled = false; dom.reorderBtn.disabled = false;
+ showMessage(`Loaded ${state.currentFile} from URL`, 'success');
+ } catch (err) { showMessage(`Error loading seq file: ${err.message}`, 'error'); }
+ }
+ if (wavURL) {
+ try {
+ const response = await fetch(wavURL);
+ if (!response.ok) throw new Error(`HTTP ${response.status}`);
+ const blob = await response.blob(), file = new File([blob], wavURL.split('/').pop(), { type: 'audio/wav' });
+ await loadAudioFile(file);
+ } catch (err) { showMessage(`Error loading audio file: ${err.message}`, 'error'); }
+ }
+ }
+
+ // Event handlers
+ dom.fileInput.addEventListener('change', e => {
const file = e.target.files[0];
if (!file) return;
-
- currentFile = file.name;
+ state.currentFile = file.name;
const reader = new FileReader();
-
- reader.onload = (e) => {
+ reader.onload = e => {
try {
const parsed = parseSeqFile(e.target.result);
- sequences = parsed.sequences;
- bpm = parsed.bpm;
- document.getElementById('currentBPM').textContent = bpm;
- document.getElementById('bpmSlider').value = bpm;
- renderTimeline();
- saveBtn.disabled = false;
- addSequenceBtn.disabled = false;
- reorderBtn.disabled = false;
- showMessage(`Loaded ${currentFile} - ${sequences.length} sequences`, 'success');
- } catch (err) {
- showMessage(`Error parsing file: ${err.message}`, 'error');
- }
+ state.sequences = parsed.sequences; state.bpm = parsed.bpm;
+ dom.currentBPM.textContent = state.bpm; dom.bpmSlider.value = state.bpm;
+ renderTimeline(); dom.saveBtn.disabled = false; dom.addSequenceBtn.disabled = false; dom.reorderBtn.disabled = false;
+ showMessage(`Loaded ${state.currentFile} - ${state.sequences.length} sequences`, 'success');
+ } catch (err) { showMessage(`Error parsing file: ${err.message}`, 'error'); }
};
-
reader.readAsText(file);
});
- saveBtn.addEventListener('click', () => {
- const content = serializeSeqFile(sequences);
- const blob = new Blob([content], { type: 'text/plain' });
- const url = URL.createObjectURL(blob);
- const a = document.createElement('a');
- a.href = url;
- a.download = currentFile || 'timeline.seq';
- a.click();
- URL.revokeObjectURL(url);
+ dom.saveBtn.addEventListener('click', () => {
+ const content = serializeSeqFile(state.sequences), blob = new Blob([content], { type: 'text/plain' });
+ const url = URL.createObjectURL(blob), a = document.createElement('a');
+ a.href = url; a.download = state.currentFile || 'timeline.seq'; a.click(); URL.revokeObjectURL(url);
showMessage('File saved', 'success');
});
- audioInput.addEventListener('change', (e) => {
- const file = e.target.files[0];
- if (!file) return;
- loadAudioFile(file);
+ dom.audioInput.addEventListener('change', e => { const file = e.target.files[0]; if (file) loadAudioFile(file); });
+ dom.clearAudioBtn.addEventListener('click', () => { clearAudio(); dom.audioInput.value = ''; });
+ dom.playPauseBtn.addEventListener('click', async () => {
+ if (state.isPlaying) stopPlayback();
+ else { if (state.playbackOffset >= state.audioDuration) state.playbackOffset = 0; await startPlayback(); }
});
- clearAudioBtn.addEventListener('click', () => {
- clearAudio();
- audioInput.value = ''; // Reset file input
+ dom.waveformContainer.addEventListener('click', async e => {
+ if (!state.audioBuffer) return;
+ const rect = dom.waveformContainer.getBoundingClientRect();
+ const clickX = e.clientX - rect.left + dom.timelineContent.scrollLeft;
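+            // The timeline x-axis is laid out in beats (pixelsPerSecond serves as pixels-per-beat),
+            // so convert pixels -> beats, then beats -> seconds with the 60/bpm factor.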
+ const clickTime = (clickX / state.pixelsPerSecond) * 60.0 / state.bpm;
+ const wasPlaying = state.isPlaying;
+ if (wasPlaying) stopPlayback(false);
+ state.playbackOffset = Math.max(0, Math.min(clickTime, state.audioDuration));
+ const clickBeats = state.playbackOffset * state.bpm / 60.0;
+ dom.playbackTime.textContent = `${state.playbackOffset.toFixed(2)}s (${clickBeats.toFixed(2)}b)`;
+ const indicatorX = clickBeats * state.pixelsPerSecond;
+ dom.playbackIndicator.style.left = `${indicatorX}px`;
+ dom.waveformPlaybackIndicator.style.left = `${indicatorX}px`;
+ if (wasPlaying) await startPlayback();
});
- playPauseBtn.addEventListener('click', () => {
- if (isPlaying) {
- stopPlayback();
- } else {
- // Reset to beginning if at end
- if (playbackOffset >= audioDuration) {
- playbackOffset = 0;
- }
- startPlayback();
- }
- });
-
- // Waveform click to seek
- waveformCanvas.addEventListener('click', (e) => {
- if (!audioBuffer) return;
-
- const rect = waveformCanvas.getBoundingClientRect();
- const clickX = e.clientX - rect.left;
- const audioDurationBeats = audioDuration * bpm / 60.0;
- const clickBeats = (clickX / waveformCanvas.width) * audioDurationBeats;
- const clickTime = clickBeats * 60.0 / bpm;
-
- const wasPlaying = isPlaying;
- if (wasPlaying) {
- stopPlayback();
- }
-
- playbackOffset = Math.max(0, Math.min(clickTime, audioDuration));
-
- if (wasPlaying) {
- startPlayback();
- } else {
- // Update display even when paused
- playbackTime.textContent = `${playbackOffset.toFixed(2)}s`;
- const indicatorX = (playbackOffset * bpm / 60.0) * pixelsPerSecond;
- playbackIndicator.style.left = `${indicatorX}px`;
- }
- });
-
- addSequenceBtn.addEventListener('click', () => {
- sequences.push({
- type: 'sequence',
- startTime: 0,
- priority: 0,
- effects: [],
- _collapsed: true
- });
- renderTimeline();
- showMessage('New sequence added', 'success');
+ dom.addSequenceBtn.addEventListener('click', () => {
+ state.sequences.push({ type: 'sequence', startTime: 0, priority: 0, effects: [], _collapsed: true });
+ renderTimeline(); showMessage('New sequence added', 'success');
});
- deleteBtn.addEventListener('click', () => {
- if (!selectedItem) return;
-
- if (selectedItem.type === 'sequence') {
- sequences.splice(selectedItem.index, 1);
- } else if (selectedItem.type === 'effect') {
- sequences[selectedItem.seqIndex].effects.splice(selectedItem.effectIndex, 1);
- }
-
- selectedItem = null;
- deleteBtn.disabled = true;
- renderTimeline();
- updateProperties();
+ dom.deleteBtn.addEventListener('click', () => {
+ if (!state.selectedItem) return;
+ if (state.selectedItem.type === 'sequence') state.sequences.splice(state.selectedItem.index, 1);
+ else if (state.selectedItem.type === 'effect') state.sequences[state.selectedItem.seqIndex].effects.splice(state.selectedItem.effectIndex, 1);
+ state.selectedItem = null; dom.deleteBtn.disabled = true; renderTimeline(); updateProperties();
showMessage('Item deleted', 'success');
});
- // Re-order sequences by time
- reorderBtn.addEventListener('click', () => {
- // Store current active sequence (if any)
- const currentActiveSeq = lastActiveSeqIndex >= 0 ? sequences[lastActiveSeqIndex] : null;
-
- // Sort sequences by start time (ascending)
- sequences.sort((a, b) => a.startTime - b.startTime);
-
- // Re-render timeline
- renderTimeline();
-
- // Restore focus on previously active sequence
+ dom.reorderBtn.addEventListener('click', () => {
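+            // Remember the currently active sequence so the viewport can follow it after sorting by start time.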
+ const currentActiveSeq = state.lastActiveSeqIndex >= 0 ? state.sequences[state.lastActiveSeqIndex] : null;
+ state.sequences.sort((a, b) => a.startTime - b.startTime); renderTimeline();
if (currentActiveSeq) {
- const newIndex = sequences.indexOf(currentActiveSeq);
- if (newIndex >= 0 && sequences[newIndex]._yPosition !== undefined) {
- // Scroll to keep it in view
- timelineContent.scrollTop = sequences[newIndex]._yPosition;
- lastActiveSeqIndex = newIndex;
+ const newIndex = state.sequences.indexOf(currentActiveSeq);
+ if (newIndex >= 0 && state.sequences[newIndex]._yPosition !== undefined) {
+ dom.timelineContent.scrollTop = state.sequences[newIndex]._yPosition; state.lastActiveSeqIndex = newIndex;
}
}
-
showMessage('Sequences re-ordered by start time', 'success');
});
- // Zoom
- zoomSlider.addEventListener('input', (e) => {
- const zoom = parseInt(e.target.value);
- pixelsPerSecond = zoom;
- zoomLevel.textContent = `${zoom}%`;
- if (audioBuffer) {
- renderWaveform(); // Re-render waveform at new zoom
- }
- renderTimeline();
- });
-
- // BPM slider
- const bpmSlider = document.getElementById('bpmSlider');
- const currentBPMDisplay = document.getElementById('currentBPM');
- bpmSlider.addEventListener('input', (e) => {
- bpm = parseInt(e.target.value);
- currentBPMDisplay.textContent = bpm;
- if (audioBuffer) {
- renderWaveform();
- }
- renderTimeline();
+ dom.zoomSlider.addEventListener('input', e => {
+ state.pixelsPerSecond = parseInt(e.target.value); dom.zoomLevel.textContent = `${state.pixelsPerSecond}%`;
+            if (state.audioBuffer) renderWaveform();
+            renderTimeline();
});
- // Beats toggle
- const showBeatsCheckbox = document.getElementById('showBeatsCheckbox');
- showBeatsCheckbox.addEventListener('change', (e) => {
- showBeats = e.target.checked;
- renderTimeline();
+ dom.bpmSlider.addEventListener('input', e => {
+ state.bpm = parseInt(e.target.value); dom.currentBPM.textContent = state.bpm;
+            if (state.audioBuffer) renderWaveform();
+            renderTimeline();
});
- // Properties panel collapse/expand
- const panelToggle = document.getElementById('panelToggle');
- const panelCollapseBtn = document.getElementById('panelCollapseBtn');
-
- panelToggle.addEventListener('click', () => {
- propertiesPanel.classList.add('collapsed');
- panelCollapseBtn.classList.add('visible');
- panelToggle.textContent = '▲ Expand';
- });
+ dom.showBeatsCheckbox.addEventListener('change', e => { state.showBeats = e.target.checked; renderTimeline(); });
+ dom.quantizeSelect.addEventListener('change', e => { state.quantizeUnit = parseFloat(e.target.value); });
+ dom.panelToggle.addEventListener('click', () => { dom.propertiesPanel.classList.add('collapsed'); dom.panelCollapseBtn.classList.add('visible'); dom.panelToggle.textContent = '▲ Expand'; });
+ dom.panelCollapseBtn.addEventListener('click', () => { dom.propertiesPanel.classList.remove('collapsed'); dom.panelCollapseBtn.classList.remove('visible'); dom.panelToggle.textContent = '▼ Collapse'; });
+ dom.timeline.addEventListener('click', () => { state.selectedItem = null; dom.deleteBtn.disabled = true; renderTimeline(); updateProperties(); });
- panelCollapseBtn.addEventListener('click', () => {
- propertiesPanel.classList.remove('collapsed');
- panelCollapseBtn.classList.remove('visible');
- panelToggle.textContent = '▼ Collapse';
+ dom.timeline.addEventListener('dblclick', async e => {
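+            // Double-click on the empty timeline surface seeks playback; clicks on child elements are ignored.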
+ if (e.target !== dom.timeline) return;
+ const timelineRect = dom.timeline.getBoundingClientRect();
+ const clickX = e.clientX - timelineRect.left + dom.timelineContent.scrollLeft;
+ const clickBeats = clickX / state.pixelsPerSecond, clickTime = clickBeats * 60.0 / state.bpm;
+ if (state.audioBuffer) {
+ const wasPlaying = state.isPlaying;
+ if (wasPlaying) stopPlayback(false);
+ state.playbackOffset = Math.max(0, Math.min(clickTime, state.audioDuration));
+ const pausedBeats = state.playbackOffset * state.bpm / 60.0;
+ dom.playbackTime.textContent = `${state.playbackOffset.toFixed(2)}s (${pausedBeats.toFixed(2)}b)`;
+ const indicatorX = pausedBeats * state.pixelsPerSecond;
+ dom.playbackIndicator.style.left = `${indicatorX}px`; dom.waveformPlaybackIndicator.style.left = `${indicatorX}px`;
+ if (wasPlaying) await startPlayback();
+ showMessage(`Seek to ${clickTime.toFixed(2)}s (${clickBeats.toFixed(2)}b)`, 'success');
+ }
});
- // Click outside to deselect
- timeline.addEventListener('click', () => {
- selectedItem = null;
- deleteBtn.disabled = true;
- renderTimeline();
- updateProperties();
+        document.addEventListener('keydown', e => {
+            if (e.target.matches('input, select, textarea')) return; // don't steal keystrokes from form fields
+            if (e.code === 'Space' && state.audioBuffer) { e.preventDefault(); dom.playPauseBtn.click(); }
+            // Quantize hotkeys: 0=Off, 1=1 beat, 2=1/2, 3=1/4, 4=1/8, 5=1/16, 6=1/32
+ const quantizeMap = { '0': '0', '1': '1', '2': '2', '3': '4', '4': '8', '5': '16', '6': '32' };
+ if (quantizeMap[e.key]) {
+ state.quantizeUnit = parseFloat(quantizeMap[e.key]);
+ dom.quantizeSelect.value = quantizeMap[e.key];
+ e.preventDefault();
+ }
});
- // Keyboard shortcuts
- document.addEventListener('keydown', (e) => {
- // Spacebar: play/pause (if audio loaded)
- if (e.code === 'Space' && audioBuffer) {
- e.preventDefault();
- playPauseBtn.click();
+ dom.timelineContent.addEventListener('scroll', () => {
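+            // Keep the waveform canvas and its playback indicator visually aligned with horizontal scrolling.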
+ if (dom.waveformCanvas) {
+ dom.waveformCanvas.style.left = `-${dom.timelineContent.scrollLeft}px`;
+ dom.waveformPlaybackIndicator.style.transform = `translateX(-${dom.timelineContent.scrollLeft}px)`;
}
});
- // Mouse wheel: zoom (with Ctrl/Cmd) or diagonal scroll
- timelineContent.addEventListener('wheel', (e) => {
+ dom.timelineContent.addEventListener('wheel', e => {
e.preventDefault();
-
- // Zoom mode: Ctrl/Cmd + wheel
if (e.ctrlKey || e.metaKey) {
- // Get mouse position relative to timeline content
- const rect = timelineContent.getBoundingClientRect();
- const mouseX = e.clientX - rect.left; // Mouse X in viewport coordinates
-
- // Calculate time position under cursor BEFORE zoom
- const scrollLeft = timelineContent.scrollLeft;
- const timeUnderCursor = (scrollLeft + mouseX) / pixelsPerSecond;
-
- // Calculate new zoom level
- const zoomDelta = e.deltaY > 0 ? -10 : 10; // Wheel down = zoom out, wheel up = zoom in
- const oldPixelsPerSecond = pixelsPerSecond;
- const newPixelsPerSecond = Math.max(10, Math.min(500, pixelsPerSecond + zoomDelta));
-
+ const rect = dom.timelineContent.getBoundingClientRect(), mouseX = e.clientX - rect.left;
+ const scrollLeft = dom.timelineContent.scrollLeft, timeUnderCursor = (scrollLeft + mouseX) / state.pixelsPerSecond;
+ const zoomDelta = e.deltaY > 0 ? -10 : 10, oldPixelsPerSecond = state.pixelsPerSecond;
+ const newPixelsPerSecond = Math.max(10, Math.min(500, state.pixelsPerSecond + zoomDelta));
if (newPixelsPerSecond !== oldPixelsPerSecond) {
- pixelsPerSecond = newPixelsPerSecond;
-
- // Update zoom slider and labels
- zoomSlider.value = pixelsPerSecond;
- zoomLevel.textContent = `${pixelsPerSecond}%`;
-
- // Re-render waveform and timeline at new zoom
- if (audioBuffer) {
- renderWaveform();
- }
- renderTimeline();
-
- // Adjust scroll position so time under cursor stays in same place
- // After zoom: new_scrollLeft = time_under_cursor * newPixelsPerSecond - mouseX
- const newScrollLeft = timeUnderCursor * newPixelsPerSecond - mouseX;
- timelineContent.scrollLeft = newScrollLeft;
+ state.pixelsPerSecond = newPixelsPerSecond; dom.zoomSlider.value = state.pixelsPerSecond; dom.zoomLevel.textContent = `${state.pixelsPerSecond}%`;
+                if (state.audioBuffer) renderWaveform();
+                renderTimeline();
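+                // Anchor the zoom at the cursor: choose scrollLeft so the time under the cursor stays at mouseX.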
+ dom.timelineContent.scrollLeft = timeUnderCursor * newPixelsPerSecond - mouseX;
}
return;
}
-
- // Normal mode: diagonal scroll
- timelineContent.scrollLeft += e.deltaY;
-
- // Calculate current time position with 10% headroom for visual comfort
- const currentScrollLeft = timelineContent.scrollLeft;
- const viewportWidth = timelineContent.clientWidth;
- const slack = (viewportWidth / pixelsPerSecond) * 0.1; // 10% of viewport width in seconds
- const currentTime = (currentScrollLeft / pixelsPerSecond) + slack;
-
- // Find the closest sequence that should be visible at current time
- // (the last sequence that starts before or at current time + slack)
+ dom.timelineContent.scrollLeft += e.deltaY;
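+            // Use ~10% of the viewport width (in timeline units) as headroom when picking the active sequence.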
+ const currentScrollLeft = dom.timelineContent.scrollLeft, viewportWidth = dom.timelineContent.clientWidth;
+ const slack = (viewportWidth / state.pixelsPerSecond) * 0.1, currentTime = (currentScrollLeft / state.pixelsPerSecond) + slack;
let targetSeqIndex = 0;
- for (let i = 0; i < sequences.length; i++) {
- if (sequences[i].startTime <= currentTime) {
- targetSeqIndex = i;
- } else {
- break;
- }
+ for (let i = 0; i < state.sequences.length; i++) {
+ if (state.sequences[i].startTime <= currentTime) targetSeqIndex = i; else break;
}
-
- // Flash effect when active sequence changes
- if (targetSeqIndex !== lastActiveSeqIndex && sequences.length > 0) {
- lastActiveSeqIndex = targetSeqIndex;
-
- // Add flash class to target sequence
- const seqDivs = timeline.querySelectorAll('.sequence');
+ if (targetSeqIndex !== state.lastActiveSeqIndex && state.sequences.length > 0) {
+ state.lastActiveSeqIndex = targetSeqIndex;
+ const seqDivs = dom.timeline.querySelectorAll('.sequence');
if (seqDivs[targetSeqIndex]) {
seqDivs[targetSeqIndex].classList.add('active-flash');
- // Remove class after animation completes
- setTimeout(() => {
- seqDivs[targetSeqIndex]?.classList.remove('active-flash');
- }, 600);
+ setTimeout(() => seqDivs[targetSeqIndex]?.classList.remove('active-flash'), 600);
}
}
-
- // Smooth vertical scroll to bring target sequence to top of viewport
- const targetScrollTop = sequences[targetSeqIndex]?._yPosition || 0;
- const currentScrollTop = timelineContent.scrollTop;
- const scrollDiff = targetScrollTop - currentScrollTop;
-
- // Smooth transition (don't jump instantly)
- if (Math.abs(scrollDiff) > 5) {
- timelineContent.scrollTop += scrollDiff * 0.3;
- }
+ const targetScrollTop = state.sequences[targetSeqIndex]?._yPosition || 0;
+ const currentScrollTop = dom.timelineContent.scrollTop, scrollDiff = targetScrollTop - currentScrollTop;
+ if (Math.abs(scrollDiff) > 5) dom.timelineContent.scrollTop += scrollDiff * 0.3;
}, { passive: false });
- // Window resize handler
- window.addEventListener('resize', () => {
- renderTimeline();
- });
-
- // Utilities
- function showMessage(text, type) {
- messageArea.innerHTML = `<div class="${type}">${text}</div>`;
- setTimeout(() => messageArea.innerHTML = '', 3000);
- }
-
- function updateStats() {
- const effectCount = sequences.reduce((sum, seq) => sum + seq.effects.length, 0);
- const maxTime = sequences.reduce((max, seq) => {
- const seqMax = seq.effects.reduce((m, e) => Math.max(m, seq.startTime + e.endTime), seq.startTime);
- return Math.max(max, seqMax);
- }, 0);
-
- stats.innerHTML = `
- 📊 Sequences: ${sequences.length} |
- 🎬 Effects: ${effectCount} |
- ⏱️ Duration: ${maxTime.toFixed(2)}s
- `;
- }
-
- // Initial render
- renderTimeline();
+ window.addEventListener('resize', renderTimeline);
+ renderTimeline(); loadFromURLParams();
</script>
</body>
</html>
diff --git a/toto.png b/toto.png
deleted file mode 100644
index 62aa745..0000000
--- a/toto.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/debug.sh b/training/debug.sh
index 083082b..083082b 100755
--- a/training/debug/debug.sh
+++ b/training/debug.sh
diff --git a/training/debug/cur/layer_0.png b/training/debug/cur/layer_0.png
deleted file mode 100644
index 0cb977b..0000000
--- a/training/debug/cur/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/cur/layer_1.png b/training/debug/cur/layer_1.png
deleted file mode 100644
index 801aad2..0000000
--- a/training/debug/cur/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/cur/toto.png b/training/debug/cur/toto.png
deleted file mode 100644
index 9caff40..0000000
--- a/training/debug/cur/toto.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/ref/layer_0.png b/training/debug/ref/layer_0.png
deleted file mode 100644
index 3e0eebe..0000000
--- a/training/debug/ref/layer_0.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/ref/layer_1.png b/training/debug/ref/layer_1.png
deleted file mode 100644
index d858f80..0000000
--- a/training/debug/ref/layer_1.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/ref/toto.png b/training/debug/ref/toto.png
deleted file mode 100644
index f869a7c..0000000
--- a/training/debug/ref/toto.png
+++ /dev/null
Binary files differ
diff --git a/training/debug/training/checkpoints/checkpoint_epoch_10.pth b/training/debug/training/checkpoints/checkpoint_epoch_10.pth
deleted file mode 100644
index 54ba5c5..0000000
--- a/training/debug/training/checkpoints/checkpoint_epoch_10.pth
+++ /dev/null
Binary files differ
diff --git a/training/debug/training/checkpoints/checkpoint_epoch_100.pth b/training/debug/training/checkpoints/checkpoint_epoch_100.pth
deleted file mode 100644
index f94e9f8..0000000
--- a/training/debug/training/checkpoints/checkpoint_epoch_100.pth
+++ /dev/null
Binary files differ
diff --git a/training/debug/training/checkpoints/checkpoint_epoch_50.pth b/training/debug/training/checkpoints/checkpoint_epoch_50.pth
deleted file mode 100644
index a602f4b..0000000
--- a/training/debug/training/checkpoints/checkpoint_epoch_50.pth
+++ /dev/null
Binary files differ
diff --git a/training/output/ground_truth.png b/training/output/ground_truth.png
deleted file mode 100644
index afa06f6..0000000
--- a/training/output/ground_truth.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_000.png b/training/output/img_000.png
deleted file mode 100644
index 5a8ae80..0000000
--- a/training/output/img_000.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_001.png b/training/output/img_001.png
deleted file mode 100644
index c4ac5d0..0000000
--- a/training/output/img_001.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_002.png b/training/output/img_002.png
deleted file mode 100644
index e4bab4a..0000000
--- a/training/output/img_002.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_003.png b/training/output/img_003.png
deleted file mode 100644
index a98004a..0000000
--- a/training/output/img_003.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_004.png b/training/output/img_004.png
deleted file mode 100644
index c186d13..0000000
--- a/training/output/img_004.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_005.png b/training/output/img_005.png
deleted file mode 100644
index 95cfdd9..0000000
--- a/training/output/img_005.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_006.png b/training/output/img_006.png
deleted file mode 100644
index d6bd187..0000000
--- a/training/output/img_006.png
+++ /dev/null
Binary files differ
diff --git a/training/output/img_007.png b/training/output/img_007.png
deleted file mode 100644
index f4e3a9a..0000000
--- a/training/output/img_007.png
+++ /dev/null
Binary files differ
diff --git a/training/output/patch_final.png b/training/output/patch_final.png
deleted file mode 100644
index e38512b..0000000
--- a/training/output/patch_final.png
+++ /dev/null
Binary files differ
diff --git a/training/output/patch_gt.png b/training/output/patch_gt.png
deleted file mode 100644
index 277175c..0000000
--- a/training/output/patch_gt.png
+++ /dev/null
Binary files differ
diff --git a/training/output/patch_tool.png b/training/output/patch_tool.png
deleted file mode 100644
index e38512b..0000000
--- a/training/output/patch_tool.png
+++ /dev/null
Binary files differ
diff --git a/training/output/patch_tool_fixed.png b/training/output/patch_tool_fixed.png
deleted file mode 100644
index e38512b..0000000
--- a/training/output/patch_tool_fixed.png
+++ /dev/null
Binary files differ
diff --git a/training/output/test_debug.png b/training/output/test_debug.png
deleted file mode 100644
index e38512b..0000000
--- a/training/output/test_debug.png
+++ /dev/null
Binary files differ
diff --git a/training/output/test_sync.png b/training/output/test_sync.png
deleted file mode 100644
index e38512b..0000000
--- a/training/output/test_sync.png
+++ /dev/null
Binary files differ
diff --git a/training/output/tool_output.png b/training/output/tool_output.png
deleted file mode 100644
index 3fcec54..0000000
--- a/training/output/tool_output.png
+++ /dev/null
Binary files differ
diff --git a/training/toto.png b/training/toto.png
deleted file mode 100644
index 2044840..0000000
--- a/training/toto.png
+++ /dev/null
Binary files differ
diff --git a/workspaces/main/shaders/test_snippet_a.wgsl b/workspaces/main/shaders/test_snippet_a.wgsl
deleted file mode 100644
index 732973d..0000000
--- a/workspaces/main/shaders/test_snippet_a.wgsl
+++ /dev/null
@@ -1,4 +0,0 @@
-// test_snippet_a.wgsl
-fn snippet_a() -> f32 {
- return 1.0;
-}
diff --git a/workspaces/main/shaders/test_snippet_b.wgsl b/workspaces/main/shaders/test_snippet_b.wgsl
deleted file mode 100644
index 071346e..0000000
--- a/workspaces/main/shaders/test_snippet_b.wgsl
+++ /dev/null
@@ -1,4 +0,0 @@
-// test_snippet_b.wgsl
-fn snippet_b() -> f32 {
- return 2.0;
-}
diff --git a/workspaces/main/timeline.seq.backup b/workspaces/main/timeline.seq.backup
deleted file mode 100644
index c3e2316..0000000
--- a/workspaces/main/timeline.seq.backup
+++ /dev/null
@@ -1,105 +0,0 @@
-# Demo Timeline
-# Generated by Timeline Editor
-# BPM 120
-
-SEQUENCE 0.00 0
- EFFECT - FlashCubeEffect 0.00 2.44
- EFFECT + FlashEffect 0.00 1.00 color=1.0,0.5,0.5 decay=0.95
- EFFECT + FadeEffect 0.10 1.00
- EFFECT + SolarizeEffect 0.00 2.00
- EFFECT + VignetteEffect 0.00 2.50 radius=0.6 softness=0.1
-
-SEQUENCE 2.50 0 "rotating cube"
- EFFECT + CircleMaskEffect 0.00 4.00 0.50
- EFFECT + RotatingCubeEffect 0.00 4.00
- EFFECT + GaussianBlurEffect 1.00 2.00 strength=1.0
- EFFECT + GaussianBlurEffect 3.00 4.00 strength=2.0
-
-SEQUENCE 5.93 0
- EFFECT - FlashCubeEffect 0.11 1.45
- EFFECT + FlashEffect 0.00 0.20
-
-SEQUENCE 6.90 1 "spray"
- EFFECT + ParticleSprayEffect 0.00 2.00
- EFFECT + ParticlesEffect 0.00 3.00
- EFFECT = GaussianBlurEffect 0.00 2.00 strength=3.0
-
-SEQUENCE 8.50 2 "Hybrid3D"
- EFFECT + ThemeModulationEffect 0.00 2.00
- EFFECT + HeptagonEffect 0.20 2.00
- EFFECT + ParticleSprayEffect 0.00 2.00
- EFFECT = ParticlesEffect 0.00 2.00
- EFFECT + Hybrid3DEffect 0.00 2.00
- EFFECT + GaussianBlurEffect 0.00 2.00
- EFFECT + CNNEffect 0.0 2.0 layers=3 blend=.9
-# EFFECT + ChromaAberrationEffect 0.00 1.50 offset=0.01 angle=1.57
-
-SEQUENCE 10.50 0 "CNN effect"
- EFFECT + HeptagonEffect 0.0 12.00
-# EFFECT + RotatingCubeEffect 0.00 12.0
-# EFFECT + Hybrid3DEffect 0.00 12.00
- EFFECT + Scene1Effect 0.0 12.0
- EFFECT + CNNEffect 1.0 12.0 layers=3 blend=.5
-
-SEQUENCE 22.0 0 "buggy"
- EFFECT + HeptagonEffect 0.00 0.20
- EFFECT + FadeEffect 0.11 1.01
-
-SEQUENCE 22.14 3
- EFFECT + ThemeModulationEffect 0.00 4.00
- EFFECT = HeptagonEffect 0.00 4.00
- EFFECT + GaussianBlurEffect 0.00 5.00 strength=1.5
- EFFECT + ChromaAberrationEffect 0.00 5.00 offset=0.03 angle=0.785
- EFFECT + SolarizeEffect 0.00 5.00
-
-SEQUENCE 23.00 2
- EFFECT - FlashCubeEffect 0.20 1.50
- EFFECT + HeptagonEffect 0.00 2.00
- EFFECT + ParticleSprayEffect 0.00 2.00
- EFFECT + ParticlesEffect 0.00 2.00
-
-SEQUENCE 22.75 2 "Fade"
- EFFECT - FlashCubeEffect 0.20 1.50
- EFFECT + FlashEffect 0.00 1.00
-
-SEQUENCE 23.88 10
- EFFECT - FlashCubeEffect 0.20 1.50
- EFFECT + GaussianBlurEffect 0.00 2.00
- EFFECT + FlashEffect 0.00 0.20
- EFFECT = FlashEffect 0.50 0.20
-
-SEQUENCE 25.59 1
- EFFECT + ThemeModulationEffect 0.00 8.00
- EFFECT + HeptagonEffect 0.20 2.00
- EFFECT + ParticleSprayEffect 0.00 8.00
- EFFECT + Hybrid3DEffect 0.00 8.06
- EFFECT + GaussianBlurEffect 0.00 8.00
- EFFECT + ChromaAberrationEffect 0.00 8.14
- EFFECT + SolarizeEffect 0.00 7.88
-
-SEQUENCE 33.08 0
- EFFECT + ThemeModulationEffect 0.00 3.00
- EFFECT + VignetteEffect 0.00 3.00 radius=0.6 softness=0.3
- EFFECT + SolarizeEffect 0.00 3.00
-
-SEQUENCE 35.31 0
- EFFECT + ThemeModulationEffect 0.00 4.00
- EFFECT + HeptagonEffect 0.20 2.00
- EFFECT + GaussianBlurEffect 0.00 8.00
- EFFECT + SolarizeEffect 0.00 2.00
-
-SEQUENCE 42.29 0
- EFFECT + ThemeModulationEffect 0.00 6.00
- EFFECT = HeptagonEffect 0.20 2.00
- EFFECT + Hybrid3DEffect 0.00 4.00
- EFFECT + ParticleSprayEffect 0.00 5.50
- EFFECT + HeptagonEffect 0.00 8.00
- EFFECT + ChromaAberrationEffect 0.00 7.50
- EFFECT + GaussianBlurEffect 0.00 8.00
-
-SEQUENCE 50.02 0
- EFFECT + ThemeModulationEffect 0.00 4.00
- EFFECT + HeptagonEffect 0.00 9.50
- EFFECT + ChromaAberrationEffect 0.00 9.00
- EFFECT + GaussianBlurEffect 0.00 8.00
-
diff --git a/workspaces/test/timeline.seq.backup b/workspaces/test/timeline.seq.backup
deleted file mode 100644
index 100c7da..0000000
--- a/workspaces/test/timeline.seq.backup
+++ /dev/null
@@ -1,8 +0,0 @@
-# WORKSPACE: test
-# Minimal timeline for audio/visual sync testing
-# BPM 120 (set in test_demo.track)
-
-SEQUENCE 0.0 0 "Main Loop"
- EFFECT + FlashEffect 0.0 16.0
-
-END_DEMO 32b