# Architectural Overview

Detailed system architecture for the 64k demo project.

---

## Hybrid 3D Renderer

**Core Idea**: Uses standard rasterization to draw proxy hulls (boxes), then raymarches inside the fragment shader to find the exact SDF surface.

**Transforms**: Uses `inv_model` matrices to perform all raymarching in local object space, handling rotation and non-uniform scaling correctly.

**Shadows**: Instance-based shadow casting with self-shadowing prevention (`skip_idx`).

---

## Sequence & Effect System

**Effect**: Abstract base for visual elements. Supports `compute` and `render` phases.

**Sequence**: Timeline of effects with start/end times defined in beats.

**MainSequence**: Top-level coordinator and framebuffer manager.

**seq_compiler**: Transpiles the workspace `timeline.seq` (beat-based) into C++ `timeline.cc` (seconds).

### Beat-Based Timing

**Timeline Notation**: Sequences are authored in musical beats (the default) or explicit seconds (`s` suffix).

**Runtime Conversion**: Beats are converted to seconds at compile time using the BPM. Effects activate at physical seconds.

**Uniform Timing**: Effects receive all three:

- `time` - physical seconds (constant, unaffected by tempo)
- `beat_time` - musical beats (from the audio playback clock)
- `beat_phase` - fractional beat, 0.0-1.0

**Tempo Separation**: Variable tempo scales `music_time` for audio triggering only. Visual rendering uses constant physical time with optional beat synchronization. See `doc/BEAT_TIMING.md` for details.

---

## Asset & Build System

**asset_packer**: Embeds binary assets (such as `.spec` files) into C++ arrays.

**Runtime Manager**: O(1) retrieval with support for lazy procedural generation.

**Automation**: `gen_assets.sh`, `build_win.sh`, and `check_all.sh` for multi-platform validation.

---

## Audio Engine

### Synthesis

Real-time additive synthesis from spectrograms via an FFT-based IDCT (O(N log N)). Stereo output (32 kHz, 16-bit, interleaved L/R).
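As a concrete reference for the inverse transform used in synthesis, the sketch below implements a naive orthonormal DCT-III (the inverse of the orthonormal DCT-II). It is written as a straightforward O(N²) loop for clarity; the engine's FFT-based version computes the same transform in O(N log N). The function name and signature are illustrative, not the project's actual API.

```cpp
#include <cmath>
#include <vector>

// Naive orthonormal DCT-III (inverse of orthonormal DCT-II).
// Reference-only sketch: O(N^2). The engine's FFT-based path is O(N log N).
std::vector<double> dct3_reference(const std::vector<double>& X) {
    const std::size_t N = X.size();
    const double pi = 3.14159265358979323846;
    std::vector<double> x(N, 0.0);
    for (std::size_t n = 0; n < N; ++n) {
        // DC term carries weight 1/sqrt(N); all others sqrt(2/N).
        double acc = X[0] / std::sqrt(static_cast<double>(N));
        for (std::size_t k = 1; k < N; ++k)
            acc += std::sqrt(2.0 / N) * X[k] *
                   std::cos(pi * k * (n + 0.5) / N);
        x[n] = acc;
    }
    return x;
}
```

Because the transform is orthonormal, feeding a unit impulse `{1, 0, 0, 0}` through it yields a constant signal of `1/sqrt(N)`, which makes a convenient sanity check against an optimized implementation.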
Uses orthonormal DCT-II/DCT-III transforms with the Numerical Recipes reordering method.

### Variable Tempo

Music-time abstraction with a configurable `tempo_scale`. Tempo changes do not affect pitch. **Visual effects are unaffected**: they use physical time, not tempo-scaled music time.

### Event-Based Tracker

Individual TrackerEvents trigger as separate voices with dynamic beat calculation. Notes within patterns respect tempo scaling. Triggering is based on `music_time` (tempo-scaled).

### Backend Abstraction

`AudioBackend` interface with `MiniaudioBackend` (production), `MockAudioBackend` (testing), and `WavDumpBackend` (offline rendering).

### Dynamic Updates

Double-buffered spectrograms allow live, thread-safe updates.

### Procedural Library

Melodies and spectral filters (noise, comb) are generated at runtime.

### Pattern System

TrackerPatterns contain lists of TrackerEvents (`beat`, `sample_id`, `volume`, `pan`). Events trigger individually based on elapsed music time.