Age  Commit message  Author
13 hours  Update docs: CNN v2 sigmoid activation summary  (skal)

- PROJECT_CONTEXT.md: Updated Effects section (sigmoid, stable training)
- TODO.md: Added sigmoid activation to CNN v2 status
- CNN_V2.md: Streamlined (removed outdated issues, updated code examples)

handoff(Claude): Documentation synchronized with sigmoid implementation.

13 hours  Replace hard clamp with sigmoid activation in CNN v2  (skal)

Fixes a training collapse where the p1/p2 channels saturate because gradients are blocked at the clamp boundaries. Sigmoid provides a smooth [0,1] mapping with continuous gradients everywhere.

Changes:
- Layer 0: clamp(x, 0, 1) → sigmoid(x)
- Final layer: clamp(x, 0, 1) → sigmoid(x)
- Middle layers: ReLU unchanged (already stable)

Updated files:
- training/train_cnn_v2.py: PyTorch model activations
- workspaces/main/shaders/cnn_v2/cnn_v2_compute.wgsl: WGSL shader
- tools/cnn_v2_test/index.html: HTML validation tool
- doc/CNN_V2.md: Documentation

Validation:
- Build clean (no shader errors)
- 34/36 tests pass (2 unrelated script tests fail)
- 10-epoch training: loss 0.153 → 0.088 (good convergence)
- cnn_test processes images successfully

Breaking change: Old checkpoints trained with clamp() are incompatible; retraining from scratch is required.

handoff(Claude): CNN v2 sigmoid activation implemented and validated.

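The gradient-blocking problem this commit fixes can be illustrated with a minimal pure-Python sketch (hypothetical helper names, not the project's actual code): the derivative of a hard clamp is exactly zero outside (0, 1), so a saturated channel stops learning, while sigmoid's derivative stays nonzero everywhere.

```python
import math

def clamp01_grad(x):
    # d/dx clamp(x, 0, 1): exactly zero once x leaves (0, 1),
    # so a saturated channel receives no learning signal.
    return 1.0 if 0.0 < x < 1.0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = s * (1 - s): small in the tails but never
    # zero, so saturated channels can still recover during training.
    s = sigmoid(x)
    return s * (1.0 - s)

for x in (-3.0, 0.5, 3.0):
    print(x, clamp01_grad(x), round(sigmoid_grad(x), 4))
```

At x = 3.0 the clamp gradient is 0.0 while the sigmoid gradient is still about 0.045, which is why the saturated p1/p2 channels could recover after the switch.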
13 hours  Add --lr parameter to CNN v2 training pipeline  (skal)

Support a custom learning rate in train_cnn_v2_full.sh (default: 1e-3).

Usage: ./scripts/train_cnn_v2_full.sh --lr 1e-4

handoff(Claude): Added --lr flag to training wrapper script

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  Document CNN v2 training pipeline improvements  (skal)

- HOWTO.md: Document always-save-checkpoint behavior and --quiet flag
- COMPLETED.md: Add milestone entry for Feb 14 CNN v2 fixes
- Details: checkpoint saving, num_layers derivation, output streamlining

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  Streamline CNN v2 training pipeline output  (skal)

- Add --quiet flag to export script (single-line summary)
- Compact validation output (all images on one line)
- Reduce noise: export 3 layers, 912 weights, 1904 bytes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  Fix CNN v2 training: always save final checkpoint, derive num_layers  (skal)

- Always save the final checkpoint after training completes
- Derive num_layers from the kernel_sizes list when multiple values are provided
- Add checkpoint validation in the training pipeline script
- Quote shell variables when passing args to Python

Fixes the issue where no checkpoint was saved when epochs < checkpoint_every.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

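A minimal sketch of the two fixes (hypothetical function names; the actual train_cnn_v2.py logic may differ): derive the layer count from the kernel-size list when several are given, and always emit a final checkpoint even when the loop never hits a checkpoint_every boundary.

```python
def resolve_num_layers(kernel_sizes, num_layers):
    # Multiple kernel sizes imply one layer each; a single value
    # is broadcast across num_layers layers.
    if len(kernel_sizes) > 1:
        return len(kernel_sizes)
    return num_layers

def train(epochs, checkpoint_every, save_checkpoint):
    last_saved = 0
    for epoch in range(1, epochs + 1):
        if epoch % checkpoint_every == 0:
            save_checkpoint(epoch)
            last_saved = epoch
    # Always save a final checkpoint, even when epochs < checkpoint_every
    # (the bug this commit fixes); skip if the loop just saved it.
    if last_saved != epochs:
        save_checkpoint(epochs)
```

With epochs=5 and checkpoint_every=10 the old code saved nothing; this version still saves the epoch-5 state.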
14 hours  Add --output-weights option to CNN v2 training pipeline  (skal)

- train_cnn_v2_full.sh: Support custom output path via --output-weights
- Pass weights path to export and validation stages
- Update HOWTO.md: Add rapid debug example (1 layer, 5 epochs)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  Fix --mix option: blend prev layer with static p4-p7, not p0-p3  (skal)

Updated gen_identity_weights.py --mix mode to use static features p4-p7 (uv_x, uv_y, sin20_y, bias) at channels 8-11 instead of p0-p3 (RGB+D) at channels 4-7.

Before: 0.5*prev[i] + 0.5*static_p{i}   (channels 4-7)
After:  0.5*prev[i] + 0.5*static_p{4+i} (channels 8-11)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  Fix CNN v2 static feature channel mapping (p4-p7 → channels 8-11)  (skal)

Fixed a bug in gen_identity_weights.py --p47 mode: static features p4-p7 (uv_x, uv_y, sin20_y, bias) are at input channels 8-11, not 4-7.

Weight tensor layout:
- Channels 0-3: Previous layer output (4D RGBA)
- Channels 4-11: Static features (8D: p0-p7)

Static features:
- p0-p3 (channels 4-7): RGB+D from mip level
- p4-p7 (channels 8-11): uv_x, uv_y, sin20_y, bias

Updated:
- training/gen_identity_weights.py: Change weights[i,i+4] to weights[i,i+8]
- workspaces/main/weights/mix_p47.bin: Regenerated (not in repo)
- doc/CNN_V2.md: Add Input Channel Mapping section with full layout table

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

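The off-by-four can be made concrete with a small sketch of the --p47 identity weights (hypothetical helper, following the 12-channel layout stated in this commit):

```python
# Per the commit: channels 0-3 are the previous layer's RGBA output,
# channels 4-7 the static p0-p3 (RGB+D), channels 8-11 the static
# p4-p7 (uv_x, uv_y, sin20_y, bias).
NUM_IN, NUM_OUT = 12, 4

def p47_identity_weights():
    # Route p4-p7 straight through to the 4 outputs: w[out][in].
    w = [[0.0] * NUM_IN for _ in range(NUM_OUT)]
    for i in range(NUM_OUT):
        w[i][i + 8] = 1.0   # the bug was i + 4, which picked up p0-p3
    return w
```

With `i + 4` the tool visualized RGB+D again instead of the positional features, which is exactly the symptom the commit describes.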
14 hours  CNN v2 web tool: Remove vizScale, always show raw layer values  (skal)

Always use vizScale=1.0 for all layers. The shader clips to [0,1] for display, showing exact layer output values without artificial dimming.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  CNN v2 web tool: Fix vizScale for final layer preview  (skal)

The final layer output is clamped [0,1] and should use vizScale=1.0 like the static features, not 0.5 like the middle layers (unbounded ReLU).

Before: All layers except static used 0.5 (too dark)
After:  Static + final layer use 1.0, middle layers use 0.5

Fixes the brightness mismatch between the big preview and the thumbnails.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  gen_identity_weights: Change --mix to 50-50 blend  (skal)

Updates --mix mode to use 50-50 weighting to avoid overflow:
- Before: p0+p4, p1+p5, p2+p6, p3+p7
- After:  0.5*p0+0.5*p4, 0.5*p1+0.5*p5, etc.

Prevents saturation when blending the input with static features.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

14 hours  gen_identity_weights: Add --p47 option for static feature visualization  (skal)

Adds a --p47 flag to output the static features directly:
- p4 → ch0 (UV.x)
- p5 → ch1 (UV.y)
- p6 → ch2 (sin encoding)
- p7 → ch3 (bias)

Useful for visualizing static feature generation without the input RGBA.

Updated doc/CNN_V2_DEBUG_TOOLS.md with --p47 usage.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

15 hours  gen_identity_weights: Add --mix option for static feature blending  (skal)

Adds a --mix flag to blend input channels with static features:
- p0+p4 → p0 (RGBA + UV.x)
- p1+p5 → p1 (RGBA + UV.y)
- p2+p6 → p2 (RGBA + sin encoding)
- p3+p7 → p3 (RGBA + bias)

Useful for debugging the static feature contribution in CNN v2.

Updated doc/CNN_V2_DEBUG_TOOLS.md with --mix usage examples.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

15 hours  CNN v2 tool: Fix off-by-one in composited layer filenames  (skal)

currentLayerIdx indexes the layerOutputs array (0=Static Features, 1=Layer 0). The filename should use the layer number, not the array index.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

15 hours  CNN v2: Fix weight buffer offset bug  (skal)

Root cause: The binary format is [header:20B][layer_info:20B×N][weights]. Both cnn_test and CNNv2Effect uploaded the entire file to weights_buffer, but the shader reads weights_buffer[0] expecting the first weight, not the header.

Fix: Skip header + layer_info when uploading to the GPU buffer.
- cnn_test.cc: Calculate weights_offset, upload only the weights section
- cnn_v2_effect.cc: Same fix for the runtime effect

Before: layer_0 output showed [R, uv_x, uv_y, black] (wrong channels)
After:  layer_0 output shows [R, G, B, D] (correct identity mapping)

Tests: 34/36 passing (2 unrelated failures)

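The fixed offset computation can be sketched in Python (field names assumed from the format described in these commits; the real header layout lives in CNN_V2_BINARY_FORMAT.md):

```python
import struct

HEADER_SIZE = 20       # magic, version, num_layers, total_weights, mip_level
LAYER_INFO_SIZE = 20   # per-layer record size, per the commit message

def weights_offset(blob):
    # Binary format: [header:20B][layer_info:20B*N][weights].
    # The GPU upload must start at the first weight, not at byte 0,
    # otherwise the shader reads header words as weights.
    _magic, _version, num_layers, _total, _mip = struct.unpack_from('<5I', blob, 0)
    return HEADER_SIZE + LAYER_INFO_SIZE * num_layers
```

For a 3-layer file the first weight starts at byte 20 + 20*3 = 80; uploading from byte 0 shifted every weight by that amount, producing the garbled [R, uv_x, uv_y, black] output.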
15 hours  cnn_test: --weights now overrides layer config from .bin file  (skal)

When using the --weights option:
- Layer count and kernel sizes are loaded from the binary header
- Warnings are shown if --layers or --cnn-version is specified
- Help text clarifies the precedence order
- Binary weights always take precedence over CLI args

Updated documentation:
- doc/CNN_TEST_TOOL.md: Usage examples with --weights
- doc/HOWTO.md: Runtime weight loading example

handoff(Claude): cnn_test --weights config override

15 hours  cnn_test: Add --weights option for runtime weight loading  (skal)

Enables testing different CNN v2 weight files without rebuilding. Automatically forces CNN v2 when --weights is specified, with a warning if --cnn-version conflicts.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

16 hours  CNN v2: Remove vizScale, always clip to [0,1]  (skal)

All layers now use scale 1.0; the shader clamps values >1.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

16 hours  CNN v2: Fix Layer 0 visualization scale (was 0.5, now 1.0)  (skal)

Layer 0 output is clamped [0,1] and does not need the 0.5 dimming. Middle layers (ReLU) keep the 0.5 scale for values >1.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

16 hours  CNN v2: Add debugging tools for mismatch investigation  (skal)

Add an identity weight generator and composited layer save for debugging HTML/C++ output differences.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

16 hours  CNN v2 training: Fix float64/float32 dtype mismatch in depth feature  (skal)

Cast the depth array to float32 when provided, preventing a torch Double/Float dtype mismatch during the forward pass.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

16 hours  CNN v2: Alpha channel depth handling and layer visualization  (skal)

Training changes:
- Changed the p3 default depth from 0.0 to 1.0 (far-plane semantics)
- Extract depth from the target alpha channel in both datasets
- Consistent alpha-as-depth across training/validation

Test tool enhancements (cnn_test):
- Added load_depth_from_alpha() for the R32Float depth texture
- Fixed bind group layout for UnfilterableFloat sampling
- Added --save-intermediates with per-channel grayscale composites
- Each layer saved as a 4x-wide PNG (p0-p3 stacked horizontally)
- Global layers_composite.png for a vertical layer-stack overview

Investigation notes:
- Static features p4-p7 ARE computed and bound correctly
- sin20_y pattern visibility difference between tools under investigation
- Binary weights timestamp (Feb 13 20:36) vs HTML tool (Feb 13 22:12)
- Next: Update HTML tool with canonical binary weights

handoff(Claude): HTML tool weights update pending - base64 encoded canonical weights ready in /tmp/weights_b64.txt for line 392 replacement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

17 hours  CNN v2: Use alpha channel for p3 depth feature + layer visualization  (skal)

Training changes (train_cnn_v2.py):
- p3 now uses the target image alpha channel (depth proxy for 2D images)
- Default changed from 0.0 → 1.0 (far-plane semantics)
- Both PatchDataset and ImagePairDataset updated

Test tools (cnn_test.cc):
- New load_depth_from_alpha() extracts PNG alpha → p3 texture
- Fixed bind group layout: use UnfilterableFloat for the R32Float depth
- Added --save-intermediates support for CNN v2:
  * Each layer_N.png shows 4 channels horizontally (1812×345 grayscale)
  * layers_composite.png stacks all layers vertically (1812×1380)
  * static_features.png shows 4 feature channels horizontally
- Per-channel visualization enables debugging layer-by-layer differences

HTML tool (index.html):
- Extract the alpha channel from the input image → depth texture
- Matches the training data distribution for validation

Note: Current weights trained with p3=0 are now mismatched. Both tools use p3=alpha consistently, so outputs remain comparable for debugging. Retraining is required for optimal quality.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

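A minimal sketch of the alpha-to-depth extraction both tools now perform (pure Python stand-in for the C++ load_depth_from_alpha(); the real code fills a GPU texture):

```python
def load_depth_from_alpha(rgba, width, height):
    # rgba: flat bytes in RGBA order, 4 bytes per pixel.
    # Returns 32-bit-float-ready values in [0, 1] for an R32Float
    # depth texture; fully opaque pixels (alpha=255) map to 1.0,
    # matching the new far-plane default for p3.
    return [rgba[4 * i + 3] / 255.0 for i in range(width * height)]
```

On the training side the equivalent array is additionally cast to float32 (see the dtype-mismatch fix above) before being handed to torch.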
17 hours  CNN v2 web tool: Document embedded default weights  (skal)

Add documentation for the DEFAULT_WEIGHTS_B64 constant:
- Current config: 4 layers, mip_level=2
- Update procedure: base64 encode and replace

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

17 hours  CNN v2 web tool: Update embedded weights to current checkpoint  (skal)

Replaces the v1 weights (3 layers) with v2 weights from workspaces/main/weights/cnn_v2_weights.bin:
- 4 layers: 3×3, 5×5, 3×3, 3×3
- 2496 f16 weights
- mip_level=2

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

18 hours  CNN v2 web tool: Multiple fixes for feature parity with cnn_test  (skal)

Changes:
- Static shader: Point sampler (nearest filter) instead of linear
- Mip handling: Use textureSampleLevel with a point sampler (fixes coordinate scaling)
- Save PNG: GPU readback via staging buffer (the WebGPU canvas lacks toBlob support)
- Depth binding: Use the input texture as depth (matches the C++ simplification)
- Header offset: Version-aware calculation (v1=4, v2=5 u32)

Known issue: Output still differs from cnn_test (color tones). Root cause TBD.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

19 hours  CNN v2 web tool: Fix static features shader sampling and header offset  (skal)

Root cause: The HTML tool was producing incorrect output vs cnn_test due to:
1. Linear filtering: textureSampleLevel() with a sampler blurred the p0-p3 features
2. Header offset bug: Used 4 u32 instead of 5 u32 for the version 2 binary format

Changes:
- Static shader: Replace textureSampleLevel (linear) with textureLoad (point)
- Bind group: Use 3 separate mip views instead of a sampler
- Header offset: Account for the version-specific header size (v1=4, v2=5 u32)
- Add a version field to the weights object for correct offset calculation
- Add a savePNG button for convenience

Result: HTML output now matches cnn_test output exactly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

19 hours  CNN v2 web tool: Enhance UI visibility and layer preview interaction  (skal)

Improve drop zone visibility with larger borders, bold blue text, and brighter hover states for better user guidance.

Replace hover-based zoom with click-to-preview: clicking any of the 4 small channel views displays it large below. The active channel is highlighted with a white border for clear visual feedback.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

19 hours  CNN v2 training: Refactor train_cnn_v2_full.sh for maintainability  (skal)

- Add helper functions: export_weights(), find_latest_checkpoint(), build_target()
- Eliminate duplicate export logic (3 instances → 1 function)
- Eliminate duplicate checkpoint finding (2 instances → 1 function)
- Consolidate build commands (4 instances → 1 function)
- Simplify optional flags with inline command substitution
- Fix validation mode: correct cnn_test argument order (positional args before --cnn-version)
- 30 fewer lines, improved readability

handoff(Claude): Refactored CNN v2 training script, fixed validation bug

19 hours  CNN test tool: Add CNN v2 support with compute shader architecture  (skal)

Implement full CNN v2 support for offline validation:
- Add --cnn-version flag (1=render pipeline, 2=compute shader)
- Load binary weights from a storage buffer (~3-5 KB)
- Static features compute pass (7D: RGBD + UV + sin + bias)
- Dynamic layer count from the binary header
- RGBA32Uint texture readback with f16→u8 conversion
- Custom f16 decoder (handles denormals, infinity, NaN)

Status:
- CNN v1: Produces incorrect output (all white)
- CNN v2: ✅ Fully functional, matches CNNv2Effect

Updated docs:
- doc/CNN_TEST_TOOL.md: Architecture, usage, validation workflow
- doc/HOWTO.md: Recommend v2 for validation

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

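A custom f16 decoder like the one this commit mentions can be sketched in a few lines of Python (the tool's actual decoder is C++; this shows the same bit-level cases it must handle):

```python
def f16_to_float(bits):
    # Decode one IEEE-754 half-precision value from its 16-bit pattern,
    # covering the three cases the commit calls out: denormals,
    # infinities, and NaN.
    sign = -1.0 if bits & 0x8000 else 1.0
    exp = (bits >> 10) & 0x1F
    frac = bits & 0x3FF
    if exp == 0:                       # zero or denormal
        return sign * frac * 2.0 ** -24
    if exp == 0x1F:                    # infinity or NaN
        return sign * float('inf') if frac == 0 else float('nan')
    return sign * (1.0 + frac / 1024.0) * 2.0 ** (exp - 15)
```

For example 0x3C00 decodes to 1.0 and 0xC000 to -2.0; the readback path then quantizes such values to u8 for PNG output.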
22 hours  CNN v2 training: Add --grayscale-loss option for luminance-based loss computation  (skal)

Add an option to compute the loss on grayscale (Y = 0.299*R + 0.587*G + 0.114*B) instead of the full RGBA channels. Useful for training models that prioritize luminance accuracy over color accuracy.

Changes:
- training/train_cnn_v2.py: Add --grayscale-loss flag and grayscale conversion in loss computation
- scripts/train_cnn_v2_full.sh: Add --grayscale-loss parameter support
- doc/CNN_V2.md: Document grayscale loss in training configuration and checkpoint format
- doc/HOWTO.md: Add usage examples for the --grayscale-loss flag

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

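The luminance-only loss can be sketched as follows (pure Python with a hypothetical MSE form; train_cnn_v2.py operates on torch tensors):

```python
def grayscale_mse(pred, target):
    # pred/target: lists of (r, g, b) tuples in [0, 1].
    # Loss on luminance Y = 0.299*R + 0.587*G + 0.114*B only, so color
    # errors that preserve brightness are not penalized.
    def lum(p):
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b
    n = len(pred)
    return sum((lum(p) - lum(t)) ** 2 for p, t in zip(pred, target)) / n
```

A pure red prediction against a black target, for instance, costs only (0.299)^2 rather than the full per-channel squared error.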
22 hours  CNN v2: Fix WebGPU validation error in uniform buffer alignment  (skal)

Fix two issues causing validation errors in test_demo:
1. Remove redundant pipeline creation without layout (static_pipeline_)
2. Change vec3<u32> to 3× u32 fields in the StaticFeatureParams struct

WGSL vec3<u32> aligns to 16 bytes (std140), making the struct 32 bytes, while the C++ struct was 16 bytes. Explicit fields ensure a consistent layout.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

22 hours  CNN v2 training: Expose all parameters as CLI options  (skal)

Expose all hardcoded parameters in train_cnn_v2_full.sh:
- Training: epochs, batch-size, checkpoint-every, kernel-sizes, num-layers, mip-level
- Patches: patch-size, patches-per-image, detector, full-image, image-size
- Directories: input, target, checkpoint-dir, validation-dir

Update --help with organized sections (modes, training, patches, directories). Update doc/HOWTO.md with usage examples.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

22 hours  CNN v2: Change feature #6 from sin(10*x) to sin(20*y)  (skal)

Update the positional encoding to use the vertical coordinate at a higher frequency.

Changes:
- train_cnn_v2.py: sin10_x → sin20_y (computed from uv_y)
- cnn_v2_static.wgsl: sin10_x → sin20_y (computed from uv_y)
- index.html: sin10_x → sin20_y (STATIC_SHADER)
- CNN_V2.md: Update feature descriptions and examples
- CNN_V2_BINARY_FORMAT.md: Update static features documentation

Feature vector: [p0, p1, p2, p3, uv_x, uv_y, sin20_y, bias]

Rationale: Higher frequency (20 vs 10) + the vertical axis provides better spatial discrimination for position encoding.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

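The resulting per-pixel feature vector can be sketched like this (hypothetical helper mirroring compute_static_features(); the real code runs vectorized over whole images):

```python
import math

def static_features(r, g, b, depth, uv_x, uv_y):
    # [p0..p3, uv_x, uv_y, sin20_y, bias], following this commit:
    # the positional encoding is now sin(20 * uv_y) rather than
    # the previous sin(10 * uv_x).
    return [r, g, b, depth, uv_x, uv_y, math.sin(20.0 * uv_y), 1.0]
```

The sin term oscillates roughly three full periods over uv_y in [0, 1], giving the network a finer vertical position signal than the old sin(10*x).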
22 hours  CNN v2: Add TODO for flexible feature layout in binary format v3  (skal)

Document a future enhancement for arbitrary feature vector layouts.

Proposed feature descriptor in binary format v3:
- Specify feature types, sources, and ordering
- Enable runtime experimentation without shader recompilation
- Examples: [R,G,B,dx,dy,uv_x,bias] or [mip1.r,mip2.g,laplacian,uv_x,sin20_x,bias]

Added TODOs in:
- CNN_V2_BINARY_FORMAT.md: Detailed proposal with struct layout
- CNN_V2.md: Future extensions section
- train_cnn_v2.py: compute_static_features() docstring
- cnn_v2_static.wgsl: Shader header comment
- cnn_v2_effect.cc: Version check comment

Current limitation: Hardcoded [p0,p1,p2,p3,uv_x,uv_y,sin10_x,bias] layout.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

22 hours  Doc: Update CNN v2 docs for binary format v2 and mip-level support  (skal)

Updated documentation to reflect binary format v2 with the mip_level field.

Changes:
- CNN_V2_BINARY_FORMAT.md: Document v2 (20-byte header) with mip_level, v1 backward compat
- CNN_V2_WEB_TOOL.md: Document auto-detection of mip_level, UI updates
- CNN_V2.md: Update overview with the mip-level feature, training pipeline

Binary format v2:
- Header: 20 bytes (was 16)
- New field: mip_level (u32) at offset 0x10
- Backward compatible: v1 loaders treat it as mip_level=0

Documentation complete for full mip-level pipeline integration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

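A version-aware header parse, per the v1/v2 layout described here, might look like this in Python (field names assumed; the authoritative layout is in CNN_V2_BINARY_FORMAT.md):

```python
import struct

def parse_header(blob):
    # v1 header: 16 bytes (magic, version, num_layers, total_weights).
    # v2 header: 20 bytes, adding mip_level (u32) at offset 0x10.
    magic, version, num_layers, total = struct.unpack_from('<4I', blob, 0)
    mip_level = struct.unpack_from('<I', blob, 16)[0] if version >= 2 else 0
    return {'magic': magic, 'version': version, 'num_layers': num_layers,
            'total_weights': total, 'mip_level': mip_level,
            'header_size': 20 if version >= 2 else 16}
```

This is the same version-specific sizing the web tool and cnn_test needed for their header-offset fixes (v1 = 4 u32, v2 = 5 u32).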
22 hours  CNN v2 HTML tool: Support binary format v2 with mip_level  (skal)

Parse the v2 header (20 bytes) and read the mip_level field. Display mip_level in the metadata panel and set the UI dropdown on load.

Changes:
- parseWeights(): Handle v1 (16-byte) and v2 (20-byte) headers
- Read mip_level from header[4] for version 2
- Return mipLevel in the parsed weights object
- updateWeightsPanel(): Display mip level in metadata
- loadWeights(): Set this.mipLevel and update the UI dropdown

Backward compatible: v1 weights → mipLevel=0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2: Add mip-level support to runtime effect  (skal)

Binary format v2 includes mip_level in the header (20 bytes, was 16). The effect reads mip_level and passes it to the static features shader via a uniform; the shader samples from the correct mip texture based on mip_level.

Changes:
- export_cnn_v2_weights.py: Header v2 with the mip_level field
- cnn_v2_effect.h: Add StaticFeatureParams, mip_level member, params buffer
- cnn_v2_effect.cc: Read mip_level from weights, create/bind the params buffer, update per-frame
- cnn_v2_static.wgsl: Accept the params uniform, sample from the selected mip level

Binary format v2:
- Header: 20 bytes (magic, version=2, num_layers, total_weights, mip_level)
- Backward compatible: v1 weights load with mip_level=0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  Doc: Update HOWTO.md with --mip-level example for full pipeline  (skal)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 full pipeline: Add --mip-level option  (skal)

The training pipeline now accepts a --mip-level flag (0-3) and passes it to train_cnn_v2.py. Compatible with all existing modes (train, validate, export-only).

Changes:
- Add --mip-level argument parsing (default: 0)
- Pass MIP_LEVEL to the training command
- Display the mip level in the config output
- Update help text with examples

Usage: ./scripts/train_cnn_v2_full.sh --mip-level 1

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 export: Read and display mip_level from checkpoints  (skal)

Export scripts now read mip_level from the checkpoint config and display it. The shader generator includes the mip level in generated comments.

Changes:
- export_cnn_v2_weights.py: Read mip_level, print in config
- export_cnn_v2_shader.py: Read mip_level, pass to shader gen, add to comments

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2: Add --mip-level option for parametric features  (skal)

Add mip level control for the p0-p3 features (0=original, 1=half, 2=quarter, 3=eighth). Uses pyrDown/pyrUp for proper Gaussian filtering during mip generation.

Changes:
- compute_static_features(): Accept a mip_level param, generate the mip via the cv2 pyramid
- PatchDataset/ImagePairDataset: Pass mip_level to feature computation
- CLI: Add --mip-level arg with choices [0,1,2,3]
- Save mip_level in the checkpoint config for tracking
- Doc updates: HOWTO.md and CNN_V2.md

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

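As a rough illustration of the level semantics (0=original, 1=half, 2=quarter, 3=eighth), here is a pure-Python mip sketch using plain 2×2 averaging; the actual code uses cv2's Gaussian pyramid (pyrDown/pyrUp), which filters more carefully than this stand-in:

```python
def mip(img, level):
    # img: 2D list of floats with power-of-two dimensions.
    # Each level halves both dimensions by averaging 2x2 blocks
    # (box filter, a simplified stand-in for cv2.pyrDown).
    for _ in range(level):
        h, w = len(img) // 2, len(img[0]) // 2
        img = [[(img[2*y][2*x] + img[2*y][2*x+1] +
                 img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
                for x in range(w)] for y in range(h)]
    return img
```

Level 2 on a 4×4 image, for example, reduces it to a single quarter-resolution-of-quarter pixel holding the global mean.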
23 hours  CNN v2 test tool: Refactoring and video loop support  (skal)

Refactoring:
- Extract the FULLSCREEN_QUAD_VS shader (reused in mipmap, display, layer viz)
- Add helper methods: getDimensions(), setVideoControlsEnabled()
- Add section headers and improve code organization (~40 lines saved)
- Move the Mip Level selector to the bottom of the left sidebar
- Remove the "Features (p0-p3)" panel header

Features:
- Add video loop support (continuous playback)

Documentation:
- Update CNN_V2_WEB_TOOL.md with the latest changes
- Document refactoring benefits and code organization
- Update the UI layout section with the current structure

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 test tool: Add mip level selector for p0-p3 features  (skal)

Add a dropdown menu in the left panel to select mip levels 0-2 for the parametric features (p0-p3/RGBD). Uses trilinear filtering for smooth downsampling at higher mip levels.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 test tool: Embed default weights for instant startup  (skal)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2: Fix activation function mismatch between training and inference  (skal)

Layer 0 now uses clamp [0,1] in both training and inference (it was using ReLU in the shaders).

- index.html: Add an is_layer_0 flag to LayerParams, handle Layer 0 separately
- export_cnn_v2_shader.py: Generate the correct activation for Layer 0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 test tool: UI improvements and video playback fixes  (skal)

- Change the Depth control from a number input to a slider (0-1 range)
- Move video controls to a floating overlay at the top of the canvas
- Remove the View mode indicator from the header (shortcuts still work)
- Remove the scrollbar from the Layer Visualization panel
- Fix layer viz flickering during video playback
- Fix video controls responsiveness during playback

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

23 hours  CNN v2 test tool: Add video playback support  (skal)

Features:
- Video file support (MP4, WebM, etc.) via drag-and-drop
- Play/Pause button with non-realtime playback (drops frames if the CNN is slow)
- Frame-by-frame navigation (◄/► step buttons)
- Unified image/video processing through the same CNN pipeline
- Audio muted (video frames only)

Optimizations:
- Layer visualization updates only on pause/seek (~5-10ms saved per frame)

Architecture:
- copyExternalImageToTexture() works with both ImageBitmap and HTMLVideoElement
- Video loading: wait for metadata → seek to frame 0 → wait for readyState≥2 (decoded)
- Playback loop: requestAnimationFrame with an isProcessing guard prevents overlapping inference
- Controls always visible, disabled for images

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

24 hours  CNN v2 web tool: Major UI redesign with three-panel layout  (skal)

UI Changes:
- Three-panel layout: left (weights), center (canvas), right (activations)
- Left sidebar: clickable weights drop zone, weights info, kernel visualization
- Right sidebar: 4 small activation views + large 4× zoom view
- Controls moved to the header (inline with the title)

Weights Visualization:
- Dedicated panel in the left sidebar with layer buttons
- 1 pixel per weight (was 20px)
- All input channels horizontal, output channels stacked vertically
- Renders to a separate canvas (not in the activation grid)

Activation Viewer:
- 4 channels in a horizontal row (was a 2×2 grid)
- Mouse-driven zoom view below (32×32 area at 4× magnification)
- Zoom shows all 4 channels in a 2×2 quadrant layout
- Removed the activations/weights mode toggle

State Preservation:
- Blend changes preserve the selected layer/channel
- Fixed the activation view reset bug

Documentation:
- Updated README with the new layout and feature descriptions
- Marked implemented features (weights viz, layer viewer)
- Updated size estimates (~22 KB total)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
