| Age | Commit message | Author |
|
Update positional encoding to use vertical coordinate at higher frequency.
Changes:
- train_cnn_v2.py: sin10_x → sin20_y (computed from uv_y)
- cnn_v2_static.wgsl: sin10_x → sin20_y (computed from uv_y)
- index.html: sin10_x → sin20_y (STATIC_SHADER)
- CNN_V2.md: Update feature descriptions and examples
- CNN_V2_BINARY_FORMAT.md: Update static features documentation
Feature vector: [p0, p1, p2, p3, uv_x, uv_y, sin20_y, bias]
Rationale: Higher frequency (20 vs 10) + vertical axis provides better
spatial discrimination for position encoding.
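A minimal NumPy sketch of the new feature assembly (p0-p3 are assumed to come
from the parametric mip/gradient pass; only the sin20_y change is from this commit):
```python
import numpy as np

def static_features(p, uv_x, uv_y):
    """Assemble the 8-D static feature vector for one pixel.

    p          -- the four parametric features p0..p3
    uv_x, uv_y -- normalized pixel coordinates in [0, 1]
    """
    sin20_y = np.sin(20.0 * uv_y)  # was sin(10 * uv_x) before this change
    bias = 1.0                     # bias is folded into the static features
    return np.array([p[0], p[1], p[2], p[3], uv_x, uv_y, sin20_y, bias],
                    dtype=np.float32)
```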
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Parse v2 header (20 bytes) and read mip_level field.
Display mip_level in metadata panel, set UI dropdown on load.
Changes:
- parseWeights(): Handle v1 (16-byte) and v2 (20-byte) headers
- Read mip_level from header[4] for version 2
- Return mipLevel in parsed weights object
- updateWeightsPanel(): Display mip level in metadata
- loadWeights(): Set this.mipLevel and update UI dropdown
Backward compatible: v1 weights → mipLevel=0
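A hedged Python sketch of the version-aware header handling (only the 16/20-byte
sizes and the mip_level slot at header[4] come from this commit; the position of
the version word is assumed):
```python
import struct

def parse_weights_header(buf: bytes):
    """Return (version, header_size_bytes, mip_level) for a .bin weights blob."""
    version = struct.unpack_from("<I", buf, 0)[0]         # assumed: word 0 is version
    if version >= 2:
        mip_level = struct.unpack_from("<I", buf, 16)[0]  # header[4], v2 only
        return version, 20, mip_level
    return version, 16, 0                                  # v1: default mipLevel = 0
```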
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Refactoring:
- Extract FULLSCREEN_QUAD_VS shader (reused in mipmap, display, layer viz)
- Add helper methods: getDimensions(), setVideoControlsEnabled()
- Add section headers and improve code organization (~40 lines saved)
- Move Mip Level selector to bottom of left sidebar
- Remove "Features (p0-p3)" panel header
Features:
- Add video loop support (continuous playback)
Documentation:
- Update CNN_V2_WEB_TOOL.md with latest changes
- Document refactoring benefits and code organization
- Update UI layout section with current structure
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Add dropdown menu in left panel to select mip levels 0-2 for parametric features (p0-p3/RGBD). Uses trilinear filtering for smooth downsampling at higher mip levels.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Layer 0 now clamps to [0,1] in both training and inference (the shaders previously used ReLU).
- index.html: Add is_layer_0 flag to LayerParams, handle Layer 0 separately
- export_cnn_v2_shader.py: Generate correct activation for Layer 0
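A small Python illustration of the unified behaviour (the activation of the later
layers is an assumption for this sketch, not taken from the commit):
```python
import numpy as np

def activate(x, layer_index):
    """Per-layer activation, identical in training and inference."""
    if layer_index == 0:
        return np.clip(x, 0.0, 1.0)  # Layer 0: clamp to [0, 1] (was ReLU in shaders)
    return np.tanh(x)                # later layers: assumed tanh for this sketch
```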
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Change Depth control from number input to slider (0-1 range)
- Move video controls to floating overlay at top of canvas
- Remove View mode indicator from header (shortcuts still work)
- Remove scrollbar from Layer Visualization panel
- Fix layer viz flickering during video playback
- Fix video controls responsiveness during playback
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Features:
- Video file support (MP4, WebM, etc.) via drag-and-drop
- Play/Pause button with non-realtime playback (drops frames if CNN slow)
- Frame-by-frame navigation (◄/► step buttons)
- Unified image/video processing through same CNN pipeline
- Audio muted (video frames only)
Optimizations:
- Layer visualization updates only on pause/seek (~5-10ms saved per frame)
Architecture:
- copyExternalImageToTexture() works with both ImageBitmap and HTMLVideoElement
- Video loading: wait for metadata → seek to frame 0 → wait for readyState≥2 (decoded)
- Playback loop: requestAnimationFrame with isProcessing guard prevents overlapping inference
- Controls always visible, disabled for images
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
UI Changes:
- Three-panel layout: left (weights), center (canvas), right (activations)
- Left sidebar: clickable weights drop zone, weights info, kernel visualization
- Right sidebar: 4 small activation views + large 4× zoom view
- Controls moved to header (inline with title)
Weights Visualization:
- Dedicated panel in left sidebar with layer buttons
- 1 pixel per weight (was 20px)
- All input channels horizontal, output channels stacked vertically
- Renders to separate canvas (not in activation grid)
Activation Viewer:
- 4 channels in horizontal row (was 2×2 grid)
- Mouse-driven zoom view below (32×32 area at 4× magnification)
- Zoom shows all 4 channels in 2×2 quadrant layout
- Removed activations/weights mode toggle
State Preservation:
- Blend changes preserve selected layer/channel
- Fixed activation view reset bug
Documentation:
- Updated README with new layout and feature descriptions
- Marked implemented features (weights viz, layer viewer)
- Updated size estimates (~22 KB total)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixed validation error where staticTex was used for both storage write
(in the static compute pass) and texture read (in the CNN bind group) within the
same command encoder. Now uses layerTextures[0] for reading, which is
the copy destination and safe for read-only access.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Align layer naming with codebase: Layer 0/1/2 (not Layer 1/2/3)
- Split static features: Static 0-3 (p0-p3) and Static 4-7 (uv,sin,bias)
- Fix Layer 2 not appearing: removed isOutput filter from layerOutputs
- Fix canvas context switching: force clear before recreation
- Disable static buttons in weights mode
- Add ASCII pipeline diagram to CNN_V2.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixes test_assets.cc compilation by adding missing test asset IDs and
procedural generators. Test-specific code is protected with DEMO_STRIP_ALL
to exclude it from release builds.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- tracker_compiler: Sort events by time before C++ generation (required
for runtime early-exit optimization)
- tracker.cc: Add FATAL_CHECK validating sorted events at init
- Add --check mode: Validate .track file without compiling
- Add --sanitize mode: Rewrite .track with sorted events and normalized
formatting
- Fix parser: Skip indented comment lines in patterns
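A Python sketch of the ordering check that --check and the init-time FATAL_CHECK
both rely on (the event structure is assumed for illustration):
```python
def events_sorted(events):
    """True if events are ordered by time, which the runtime early-exit requires."""
    return all(a["time"] <= b["time"] for a, b in zip(events, events[1:]))

def sanitize(events):
    """Return the events sorted by time, as --sanitize rewrites them."""
    return sorted(events, key=lambda e: e["time"])
```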
All audio tests passing.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
**Architecture changes:**
- Static features (8D): p0-p3 (parametric) + uv_x, uv_y, sin(10×uv_x), bias
- Input RGBD (4D): fed separately to all layers
- All layers: uniform 12D→4D (4 prev/input + 8 static → 4 output)
- Bias integrated in static features (bias=False in PyTorch)
**Weight calculations:**
- 3 layers × (12 × 3×3 × 4) = 1296 weights
- f16: 2.6 KB (vs old variable arch: ~6.4 KB)
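The weight arithmetic written out as a quick check:
```python
layers = 3
in_channels = 12            # 4 prev/input + 8 static
kernel = 3 * 3
out_channels = 4
weights = layers * in_channels * kernel * out_channels  # 3 * 12 * 9 * 4 = 1296
size_bytes_f16 = weights * 2                            # 2592 bytes, i.e. ~2.6 KB
```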
**Updated files:**
*Training (Python):*
- train_cnn_v2.py: Uniform model, takes input_rgbd + static_features
- export_cnn_v2_weights.py: Binary export for storage buffers
- export_cnn_v2_shader.py: Per-layer shader export (debugging)
*Shaders (WGSL):*
- cnn_v2_static.wgsl: p0-p3 parametric features (mips/gradients)
- cnn_v2_compute.wgsl: 12D input, 4D output, vec4 packing
*Tools:*
- HTML tool (cnn_v2_test): Updated for 12D→4D, layer visualization
*Docs:*
- CNN_V2.md: Updated architecture, training, validation sections
- HOWTO.md: Reference HTML tool for validation
*Removed:*
- validate_cnn_v2.sh: Obsolete (used CNN v1 tool)
All code consistent with bias=False (bias in static features as 1.0).
handoff(Claude): CNN v2 architecture finalized and documented
|
|
- Rename 'Static (L0)' → 'Static' (clearer, less confusing)
- Update channel labels: 'R/G/B/D' → 'Ch0 (R)/Ch1 (G)/Ch2 (B)/Ch3 (D)'
- Add 'Layer' prefix in weights table for consistency
- Document layer indexing: Static + Layer 1,2,3... (UI) ↔ weights.layers[0,1,2...]
- Add explanatory notes about 7D input and 4-of-8 channel display
- Create doc/CNN_V2_BINARY_FORMAT.md with complete .bin specification
- Cross-reference spec in CNN_V2.md and CNN_V2_WEB_TOOL.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Features:
- Right sidebar with Layer Visualization (top) and Weights Info (collapsible, bottom)
- Activations mode: 4-channel grayscale views per layer (Static L0 + CNN layers)
- Weights mode: Kernel visualization with 2D canvas rendering
- Mode tabs to switch between activation and weight inspection
- Per-layer texture storage (separate from ping-pong compute buffers)
- Debug shader modes (UV gradient, raw packed data, unpacked f16)
- Comprehensive logging for diagnostics
Architecture:
- Persistent layerTextures[] for visualization (one per layer)
- Separate computeTextures[] for CNN ping-pong
- copyTextureToTexture after each layer pass
- Canvas recreation on mode switch (2D vs WebGPU context)
- Weight parsing with f16 unpacking and min/max calculation
Known Issues:
- Layer activations show black (texture data empty despite copies)
- Weight kernels not displaying (2D canvas renders not visible)
- Debug mode 10 (UV gradient) works, confirming texture access OK
- Root cause: likely GPU command ordering or texture usage flags
Documentation:
- Added doc/CNN_V2_WEB_TOOL.md with full status, architecture, debug steps
- Detailed issue tracking with investigation notes and next steps
Status: Infrastructure complete, debugging data flow issues.
handoff(Claude): Layer viz black due to empty textures despite copyTextureToTexture.
Weight viz black despite correct canvas setup. Both issues need GPU pipeline audit.
|
|
Implements single-file HTML tool for rapid CNN weight validation:
Features:
- Drag-drop PNG images (whole window) and .bin weights
- Real-time WebGPU compute pipeline (static features + N layers)
- Data-driven execution (reads layer count from binary)
- View modes: CNN output / Original / Diff (×10)
- Blend slider (0.0-1.0) for effect strength
- Console log with timestamps
- Keyboard shortcuts: SPACE (original), D (diff)
Architecture:
- Embedded WGSL shaders (static + compute + display)
- Binary parser for .bin format (header + layer info + f16 weights)
- Persistent textures for view mode switching
- Absolute weight offset calculation (header + layer info skip)
Implementation notes:
- Weight offsets in binary are relative to weights section
- JavaScript precalculates absolute offsets: headerOffsetU32 * 2 + offset
- Matches C++ shader behavior (simple get_weight without offset param)
- Ping-pong textures for multi-layer processing
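The offset precalculation as a hedged Python sketch (offsets in the binary are
f16 indices relative to the weights section; one u32 header word spans two f16 values):
```python
def absolute_f16_offset(header_offset_u32: int, relative_offset_f16: int) -> int:
    """Mirror the JavaScript precalculation: headerOffsetU32 * 2 + offset."""
    return header_offset_u32 * 2 + relative_offset_f16
```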
TODO:
- Side panel: .bin metadata, weight statistics, validation
- Layer inspection: R/G/B/A plane split, intermediate outputs
- Activation heatmaps for debugging
Files:
- tools/cnn_v2_test/index.html (24 KB, 730 lines)
- tools/cnn_v2_test/README.md (usage guide, troubleshooting)
handoff(Claude): CNN v2 HTML testing tool complete, documented TODOs for future enhancements
|
|
Consolidate repeated timeline/resource analysis code to improve
maintainability and reduce duplication.
seq_compiler.cc changes:
- Extract timeline analysis (max time, sorting) into analyze_timeline()
- Extract sequence end calculation into get_sequence_end()
- Reduces ~45 lines of duplicate code
tracker_compiler.cc changes:
- Extract resource analysis into ResourceAnalysis struct
- Consolidate sample counting and recommendations
- Reduces ~75 lines of duplicate code
Both tools verified with successful builds.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Extract repeated logic into focused helper functions:
- ParseProceduralParams() - eliminates 2× duplicate parameter parsing
- ParseProceduralFunction() - unifies PROC() and PROC_GPU() handling
- ProcessMeshFile() - encapsulates 164-line mesh processing
- ProcessImageFile() - encapsulates image loading
Reduces 598→568 lines (-5%), improves readability, preserves behavior.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Eliminates 36 duplicate shader files across workspaces.
Structure:
- common/shaders/{math,render,compute}/ - Shared utilities (20 files)
- workspaces/*/shaders/ - Workspace-specific only
Changes:
- Created common/shaders/ with math, render, compute subdirectories
- Moved 20 common shaders from workspaces to common/
- Removed duplicates from test workspace
- Updated assets.txt: ../../common/shaders/ references
- Enhanced asset_packer.cc: filesystem path normalization for ../ resolution
Implementation: Option 1 from SHADER_REUSE_INVESTIGATION.md
- Single source of truth for common code
- Workspace references via relative paths
- Path normalization in asset packer
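What the ../ resolution amounts to, sketched in Python (the real packer is C++;
the shader file name in the example is hypothetical):
```python
from pathlib import PurePosixPath

def normalize_asset_path(workspace_dir: str, ref: str) -> str:
    """Collapse '..' segments in a reference from a workspace assets.txt."""
    parts = []
    for part in (PurePosixPath(workspace_dir) / ref).parts:
        if part == "..":
            if parts:
                parts.pop()
        elif part != ".":
            parts.append(part)
    return str(PurePosixPath(*parts))

# normalize_asset_path("workspaces/test", "../../common/shaders/math/noise.wgsl")
# -> "common/shaders/math/noise.wgsl"
```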
handoff(Claude): Common shader directory implemented
|
|
Workspace structure now:
- workspaces/{main,test}/obj/ (3D models)
- workspaces/{main,test}/shaders/ (WGSL shaders)
- workspaces/{main,test}/music/ (audio samples)
Changes:
- Moved workspaces/*/assets/music/ → workspaces/*/music/
- Updated assets.txt paths (assets/music/ → music/)
- Moved test_demo.{seq,track} to tools/
- Moved assets/originals/ → tools/originals/
- Removed assets/common/ (legacy, duplicated in workspaces)
- Removed assets/final/ (legacy, superseded by workspaces)
- Updated hot-reload paths in main.cc
- Updated CMake references for test_demo and validation
- Updated gen_spectrograms.sh paths
handoff(Claude): Workspace reorganization complete
|
|
- Add --cnn-version <1|2> flag to select between CNN v1 and v2
- Implement beat_phase modulation for dynamic blend in both CNN effects
- Fix CNN v2 per-layer uniform buffer sharing (each layer needs own buffer)
- Fix CNN v2 y-axis orientation to match render pass convention
- Add Scene1Effect as base visual layer to test_demo timeline
- Reorganize CNN v2 shaders into cnn_v2/ subdirectory
- Update asset paths and documentation for new shader organization
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Use the container's getBoundingClientRect instead of the timeline's. The timeline
can scroll off-screen with negative left values; the container stays visible and
provides reliable viewport coordinates. Fixes double-click seek and drag.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Read the actual canvas offset from style.left instead of assuming scrollLeft.
The canvas is positioned with a negative left offset, so we subtract it to get
the correct beat position.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Use helper functions (beatsToTime, timeToBeats) consistently in click
handlers. Fixes red cursor jumping to wrong position during seek.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Factorize common patterns: POST_PROCESS_EFFECTS constant, helper functions
(beatsToTime, timeToBeats, beatRange, detectConflicts). Reduce verbosity
with modern JS features (nullish coalescing, optional chaining, Object.assign).
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Detects same-priority post-process effects (within and across sequences).
CPU load bar (10px, pastel colors) shows conflicts. Matches seq_compiler.cc
logic: warns when 2+ post-process effects share priority, regardless of
time overlap.
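A Python sketch of the conflict rule (grouping by priority regardless of time
overlap; the effect fields are assumed):
```python
from collections import defaultdict

def find_priority_conflicts(effects):
    """Report priorities shared by 2+ post-process effects, across all sequences."""
    by_priority = defaultdict(list)
    for fx in effects:
        if fx["post_process"]:
            by_priority[fx["priority"]].append(fx["name"])
    return {prio: names for prio, names in by_priority.items() if len(names) > 1}
```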
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Features:
- CPU load bar: Color-coded (green→yellow→red) effect density visualization
- Overlays under waveform to save space, always visible
- Constant load (1.0) per active effect, 0.1 beat resolution
- Add Effect button: Create new effects in selected sequence
- Delete buttons in properties panel for quick access
- Timeline favicon (green bars SVG)
Fixes:
- Handle drag no longer jumps on mousedown (offset tracking)
- Sequence name input accepts numbers (explicit inputmode)
- Start Time label corrected (beats, not seconds)
Updated timeline.seq with beat-based timing adjustments.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixes:
- Sequence dragging with scroll offset
- Double-click collapse/expand (DOM recreation issue)
- Collapsed sequence dragging (removed stopPropagation)
Features:
- Quantize dropdown (Off, 1/32→1 beat) replaces snap-to-beat checkbox
- Works in both beat and second display modes
- Hotkeys: 0=Off, 1=1beat, 2=1/2, 3=1/4, 4=1/8, 5=1/16, 6=1/32
- Separate "Show Beats" toggle for display vs snap behavior
Technical:
- Track dragMoved state to avoid unnecessary DOM recreation
- Preserve dblclick detection by deferring renderTimeline()
- Quantization applies to sequences and effects uniformly
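The snapping itself is a plain grid round; a sketch with the grid sizes the
hotkeys map to:
```python
GRIDS = {0: None, 1: 1.0, 2: 0.5, 3: 0.25, 4: 0.125, 5: 0.0625, 6: 0.03125}

def quantize(beats: float, hotkey: int) -> float:
    """Snap a beat position to the selected grid; hotkey 0 disables snapping."""
    grid = GRIDS.get(hotkey)
    return beats if grid is None else round(beats / grid) * grid
```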
handoff(Claude): timeline editor quantize + drag fixes complete
|
|
Reduced file size from 1899 to 823 lines (57% reduction) while improving
maintainability and user experience.
CSS improvements:
- Added CSS variables for colors, spacing, and border radius
- Consolidated duplicate button/input/label styles
- Added missing .zoom-controls class definition
- Reduced CSS from ~510 to ~100 lines
JavaScript refactoring:
- Centralized global state into single `state` object
- Created `dom` object to cache all element references
- Removed all inline event handlers (onclick, oninput)
- Replaced with proper addEventListener pattern
- Fixed missing playbackControls reference (bug fix)
- Reduced JS from ~1320 to ~660 lines
UX improvements:
- Playback indicators (red bars) now always visible, start at 0s
- During playback, highlight current sequence green (no expand/collapse reflow)
- Smooth scrolling follows playback indicator (10% interpolation at 40% viewport)
- Moved "Show Beats" checkbox inline with BPM controls
- Fixed playback controls layout (time left of button, proper gap/alignment)
- Error messages now logged to console as well as UI
No functional regressions - all features work identically.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Add red bar playback indicator on waveform (synced with timeline)
- Fix playback continuation after double-click seek (async/await)
- Improve stopPlayback() to preserve jump positions
- Add error handling to startPlayback()
- Update waveform click-to-seek to match double-click behavior
- Sync waveform indicator scroll with timeline
- Display time in both seconds and beats on seek
- Update documentation with new features
handoff(Claude): Timeline editor now has dual playback indicators and seamless seeking.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Audio playback controls (play/pause, spacebar shortcut)
- Red playback indicator with auto-scroll (middle third viewport)
- Auto-expand active sequence during playback, collapse previous
- Click waveform to seek
- Sticky header: waveform + timeline ticks stay at top
- Sequences confined to separate scrollable container below header
- Document known bugs: zoom sync, positioning, reflow issues
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Timeline editor now stores all times internally as beats (not seconds),
aligning with the project's beat-based timing system. Added BPM slider
for tempo control. Serializes to beats (default format) and displays
beats primarily with seconds in tooltips.
Changes:
- parseTime() returns beats (converts 's' suffix to beats)
- serializeSeqFile() outputs beats (bare numbers)
- Timeline markers show beats (4-beat/bar increments)
- BPM slider (60-200) for tempo editing
- Snap-to-beat rounds to nearest beat
- Audio waveform aligned to beats
- showBeats enabled by default
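A Python sketch of the parsing/serialization convention (the editor itself is
JavaScript; this only illustrates the unit handling):
```python
def parse_time(text: str, bpm: float) -> float:
    """Return beats; bare numbers are beats, '2.5s' is seconds, '5b' is explicit beats."""
    text = text.strip()
    if text.endswith("s"):
        return float(text[:-1]) * bpm / 60.0  # seconds -> beats
    if text.endswith("b"):
        return float(text[:-1])
    return float(text)

def serialize_time(beats: float) -> str:
    """Beats are the default unit, written as bare numbers."""
    return f"{beats:g}"
```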
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
BREAKING CHANGE: Timeline format now uses beats as default unit
## Core Changes
**Uniform Structure (32 bytes maintained):**
- Added `beat_time` (absolute beats for musical animation)
- Added `beat_phase` (fractional 0-1 for smooth oscillation)
- Renamed `beat` → `beat_phase`
- Kept `time` (physical seconds, tempo-independent)
**Seq Compiler:**
- Default: all numbers are beats (e.g., `5`, `16.5`)
- Explicit seconds: `2.5s` suffix
- Explicit beats: `5b` suffix (optional clarity)
**Runtime:**
- Effects receive both physical time and beat time
- Variable tempo affects audio only (visual uses physical time)
- Beat calculation from audio time: `beat_time = audio_time * BPM / 60`
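The runtime derivation written out (straight from the formula above; beat_phase
is the fractional part used for smooth oscillation):
```python
def beat_clock(audio_time_s: float, bpm: float):
    """Derive the beat uniforms from physical audio time."""
    beat_time = audio_time_s * bpm / 60.0  # absolute beats
    beat_phase = beat_time % 1.0           # fractional [0, 1) within the current beat
    return beat_time, beat_phase
```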
## Migration
- Existing timelines: converted with explicit 's' suffix
- New content: use beat notation (musical alignment)
- Backward compatible via explicit notation
## Benefits
- Musical alignment: sequences sync to bars/beats
- BPM independence: timing preserved on BPM changes
- Shader capabilities: animate to musical time
- Clean separation: tempo scaling vs. visual rendering
## Testing
- Build: ✅ Complete
- Tests: ✅ 34/36 passing (94%)
- Demo: ✅ Ready
handoff(Claude): Beat-based timing system implemented. Variable tempo
only affects audio sample triggering. Visual effects use physical_time
(constant) and beat_time (musical). Shaders can now animate to beats.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Sequences start collapsed by default for better overview
- Move controls to header for vertical space savings
- Remove "Pixels per second" label (redundant with zoom %)
- Move properties panel to bottom left
- Update collapse arrows for vertical layout (▼/▲)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Sticky time markers stay visible when scrolling
- Faint vertical grid lines aligned with ticks for better alignment
- Collapsible sequences via double-click (35px collapsed state)
- Updated all references from demo.seq to timeline.seq
- Consolidated and tightened documentation
- Fixed _collapsed initialization bug
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Fix bias division bug: divide by num_positions to compensate for
shader loop accumulation (affects all layers); see the sketch below
- train_cnn.py: Save RGBA output preserving alpha channel from input
- Add --debug-hex flag to both tools for pixel-level debugging
- Remove sRGB/linear_png debug code from cnn_test
- Regenerate weights with corrected bias export
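A worked sketch of the bias fix: if the shader adds the bias once per tap inside
its accumulation loop, the exporter must pre-divide so the summed contribution
equals the trained bias (a 3x3 kernel is assumed here):
```python
def export_bias(trained_bias: float, kernel_w: int = 3, kernel_h: int = 3) -> float:
    """Compensate for the shader accumulating bias at every kernel position."""
    num_positions = kernel_w * kernel_h  # 9 for a 3x3 kernel
    return trained_bias / num_positions  # the shader's loop sums it back up
```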
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
The SamplerCache singleton never released samplers, causing the device to retain
references at shutdown. Add a clear() method and call it before fixture cleanup.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
- Release queue reference after submit in texture_readback
- Add final wgpuDevicePoll before cleanup to sync GPU work
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fixed a buffer-mapping callback mode mismatch that caused an Unknown status.
Changed from WaitAnyOnly+ProcessEvents to AllowProcessEvents+DevicePoll.
Readback is now functional, but CNN output is incorrect (all white).
The issue is isolated to tool-specific binding/uniform setup; CNNEffect
in the demo works correctly.
Technical details:
- WGPUCallbackMode_WaitAnyOnly requires wgpuInstanceWaitAny
- Using wgpuInstanceProcessEvents with WaitAnyOnly never fires callback
- Fixed by using AllowProcessEvents mode + wgpuDevicePoll
- Removed debug output and platform warnings
Status: 36/36 tests pass, readback works, CNN shader issue remains.
handoff(Claude): CNN test tool readback fixed, output debugging needed
|
|
Debug additions:
- Print loaded shader size (confirms assets work: 2274 bytes)
- Add wgpuDevicePoll after each layer for GPU sync
- Verify shader loading with null/empty checks
Findings:
- Shader loads correctly (2274 bytes)
- GPU commands execute without validation errors
- Pipeline compiles successfully
- Output remains all-black despite correct architecture
Likely causes:
- Render setup differs from demo's CNNEffect
- Possible issue with bind group bindings
- Fragment shader may not be executing
- Texture sampling might be failing
Next steps:
- Create minimal solid-color render test
- Compare bind group setup with working CNNEffect
- Add fragment shader debug output (if possible)
- Test with demo's CNN effect to verify weights/shader work
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Bugfixes:
- Fixed ping-pong logic: update current_input BEFORE flipping dst_idx (see the sketch after this list)
- Use RGBA16Float for intermediate layers (preserve [-1,1] range from tanh)
- Separate BGRA8Unorm final output texture for readback
- Create two pipelines: intermediate (RGBA16Float) and final (BGRA8Unorm)
- Fix all cleanup code to reference correct pipeline variables
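A Python sketch of the corrected ordering (texture and layer objects are
stand-ins; only the ordering fix itself is from this commit):
```python
def run_layers(layers, input_tex, ping, pong):
    """Ping-pong the layer outputs between two intermediate textures."""
    textures = [ping, pong]
    current_input, dst_idx = input_tex, 0
    for layer in layers:
        dst = textures[dst_idx]
        layer.render(src=current_input, dst=dst)
        current_input = dst      # fix: record the new input BEFORE flipping
        dst_idx = 1 - dst_idx    # then flip the destination for the next layer
    return current_input
```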
Implementation:
- Intermediate textures use RGBA16Float to avoid clamping [-1,1] → [0,1]
- Final layer renders to separate BGRA8Unorm texture
- Correct texture view descriptors for each format
- Layer 0-1: render to RGBA16Float ping-pong textures
- Layer 2: render to BGRA8Unorm output texture
Documentation:
- Added CNN testing section to doc/HOWTO.md
- Updated CNN_TEST_TOOL.md with ground-truth comparison workflow
- Noted remaining black output bug (under investigation)
Status:
- Tool compiles and runs without GPU errors
- Architecture correct: ping-pong, format conversion, separate pipelines
- Output still all-black (unknown cause, needs debugging)
- All 36 tests still pass
handoff(Claude): CNN test tool bugfixes complete, black output remains
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Core GPU Utility (texture_readback):
- Reusable synchronous texture-to-CPU readback (~150 lines)
- STRIP_ALL guards (0 bytes in release builds)
- Handles COPY_BYTES_PER_ROW_ALIGNMENT (256-byte alignment)
- Refactored OffscreenRenderTarget to use new utility
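The row-alignment handling, sketched in Python (WebGPU's COPY_BYTES_PER_ROW_ALIGNMENT
is 256 bytes; 4 bytes per pixel assumed for an RGBA8/BGRA8 readback):
```python
ALIGN = 256  # WebGPU COPY_BYTES_PER_ROW_ALIGNMENT

def readback_layout(width: int, height: int, bytes_per_pixel: int = 4):
    """Padded bytes-per-row and staging-buffer size for a texture-to-buffer copy."""
    unpadded = width * bytes_per_pixel
    padded = (unpadded + ALIGN - 1) // ALIGN * ALIGN
    return padded, padded * height

# e.g. a 555-pixel-wide RGBA8 row is 2220 bytes unpadded, padded to 2304
```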
CNN Test Tool (cnn_test):
- Standalone PNG→3-layer CNN→PNG/PPM tool (~450 lines)
- --blend parameter (0.0-1.0) for final layer mixing
- --format option (png/ppm) for output format
- ShaderComposer integration for include resolution
Build Integration:
- Added texture_readback.cc to GPU_SOURCES (both sections)
- Tool target with STB_IMAGE support
Testing:
- All 36 tests pass (100%)
- Processes 64×64 and 555×370 images successfully
- Ground-truth validation setup complete
Known Issues:
- BUG: Tool produces black output (uninitialized input texture)
- First intermediate texture not initialized before layer loop
- MSE 64860 vs Python ground truth (expected <10)
- Fix required: Copy input to intermediate[0] before processing
Documentation:
- doc/CNN_TEST_TOOL.md - Full technical reference
- Updated PROJECT_CONTEXT.md and COMPLETED.md
handoff(Claude): CNN test tool foundation complete, needs input init bugfix
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Allows regenerating just the .wgsl shader file without touching
.h/.cc files when iterating on shader code.
Usage: ./tools/shadertoy/convert_shadertoy.py shader.txt EffectName --shader-only
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
ShaderToy uses a bottom-left origin with Y-up, but our system uses a
top-left origin with Y-down. Added a Y-flip in the fragment shader to
display ShaderToy effects correctly.
**Changes:**
- workspaces/main/shaders/scene1.wgsl: Flip Y before coordinate conversion
- tools/shadertoy/convert_shadertoy.py: Generate Y-flip in all conversions
**Formula:**
```wgsl
let flipped = vec2<f32>(p.x, uniforms.resolution.y - p.y);
```
This ensures ShaderToy shaders display right-side up.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Major improvements to reduce manual code changes after conversion:
**Scene vs Post-Process Detection:**
- Added --post-process flag (default: scene effect)
- Scene effects: Simple pattern like HeptagonEffect (no texture input)
- Post-process effects: Uses PostProcessEffect base class
**Generated Code Now Compiles As-Is:**
- Scene: Uses gpu_create_render_pass() helper
- Post-process: Uses create_post_process_pipeline() helper
- No manual Effect base class rewrites needed
- Correct shader bindings for each type
**Improved WGSL Conversion:**
- Better mainImage extraction and conversion
- Proper fragCoord -> p.xy mapping
- Handles iResolution/iTime -> uniforms automatically
- Fixed return statements (fragColor = ... -> return ...)
- Preserves helper functions from original shader
**Better Instructions:**
- Shows exact asset.txt format with SHADER_ prefix
- Includes shader declaration/definition steps
- Indicates correct test list (scene_effects vs post_process_effects)
**Example:**
```bash
./tools/shadertoy/convert_shadertoy.py shader.txt MyEffect
# Generates compile-ready scene effect
./tools/shadertoy/convert_shadertoy.py blur.txt Blur --post-process
# Generates compile-ready post-process effect
```
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Fix EFFECT keyword format across all documentation and scripts - priority
modifier (+/=/-) is required but was missing from examples.
**Documentation fixes:**
- doc/HOWTO.md: Added missing + to EFFECT example
- doc/RECIPE.md: Added priority modifiers to examples
- tools/shadertoy/README.md: Fixed test path, clarified workflow
- tools/shadertoy/convert_shadertoy.py: Updated output instructions
**New automation guide:**
- doc/EFFECT_WORKFLOW.md: Complete step-by-step checklist for AI agents
- Exact file paths and line numbers
- Common issues and fixes
- Asset ID naming conventions
- CMakeLists.txt dual-section requirement
- Test list instructions (post_process_effects vs scene_effects)
**Integration:**
- CLAUDE.md: Added EFFECT_WORKFLOW.md to Tier 2 (always loaded)
- doc/AI_RULES.md: Added "Adding Visual Effects" quick reference
- README.md: Added EFFECT_WORKFLOW.md to documentation list
**CMakeLists.txt:**
- Disabled incomplete cube_sphere_effect.cc (ShaderToy conversion WIP)
**Timeline:**
- Commented out incomplete CubeSphereEffect
- Removed obsolete constructor argument
Fixes #issue-with-effect-syntax
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Add automated conversion pipeline for ShaderToy shaders to demo effects:
- convert_shadertoy.py: Automated code generation script
- Manual templates: Header, implementation, and WGSL boilerplate
- Example shader: Test case for conversion workflow
- README: Complete conversion guide with examples
Handles basic GLSL→WGSL conversion (types, uniforms, mainImage extraction).
Manual fixes needed for fragColor returns and complex type inference.
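A tiny illustration of the kind of textual rewrites the basic conversion performs
(these patterns are illustrative, not the actual convert_shadertoy.py rules):
```python
import re

GLSL_TO_WGSL = [
    (r"\bvec([234])\(", r"vec\1<f32>("),        # vec3(...) -> vec3<f32>(...)
    (r"\bfloat\b", "f32"),
    (r"\biResolution\b", "uniforms.resolution"),
    (r"\biTime\b", "uniforms.time"),
]

def convert_body(glsl: str) -> str:
    """Apply simple type/uniform rewrites; mainImage extraction and fragColor
    returns still need the manual fixes noted above."""
    for pattern, repl in GLSL_TO_WGSL:
        glsl = re.sub(pattern, repl, glsl)
    return glsl
```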
Organized under tools/shadertoy/ for maintainability.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|