Age  Commit message  Author
11 hours  fix(cnn_v3/tools): fix getElementById illegal invocation + auto-preload weights  skal
- Bind document.getElementById in filmParams() to fix 'Illegal invocation' error
- Add preload() that auto-fetches cnn_v3_weights.bin + cnn_v3_film_mlp.bin from workspaces/main/weights/ on init; skips silently if files not found
- Add '↺ Reload from workspace weights/' button for manual re-fetch
handoff(Gemini): cnn_v3 web tool fixed; serve from repo root via http.server
11 hours  new timeline  skal
11 hours  fix(cnn_v3): fix texture format mismatches in cnn_v3_test sequence  skal
- seq_compiler: add gbuf_albedo/gbuf_rgba32uint to NODE_TYPES
- timeline: declare gbuf_feat0/feat1 as gbuf_rgba32uint, route CNNv3Effect output through cnn_v3_out (gbuf_albedo) + Passthrough to sink (dec0 can't write directly to Rgba8Unorm sink)
- cnn_v3_effect: fix update_bind_groups using .set() instead of .replace() causing FATAL assert on second frame
- TODO: add CNN v3 "2D mode" (G-buffer-free) future task
handoff(Gemini): CNNv3Effect now runs without crashes at --seek 48
11 hours  fix(tools): fetch stb_image_write.h in project_init.sh, fix cnn_test include path  skal
11 hours  feat(cnn_v3): wire trained weights into CNNv3Effect + add timeline test sequence  skal
- CNNv3Effect constructor loads ASSET_WEIGHTS_CNN_V3 via GetAsset on startup
- seq_compiler.py: CLASS_TO_HEADER supports full #include paths for cnn_v3/ classes
- timeline.seq: add cnn_v3_test sequence at 48s (GBufferEffect → CNNv3Effect)
- test_cnn_v3_parity: zero_weights test now explicitly uploads zeros to override asset
handoff(Gemini): CNNv3Effect ready; export weights to workspaces/main/weights/ and seek to 48s to test
11 hours  feat(cnn_v3): add weight assets to assets.txt, update HOW_TO_CNN export docs  skal
- Add WEIGHTS_CNN_V3 and WEIGHTS_CNN_V3_FILM_MLP to workspaces/main/assets.txt
- Add opencv-python and pillow to export_cnn_v3_weights.py uv inline deps
- Update HOW_TO_CNN.md §3 export target → workspaces/main/weights/
- Update HOW_TO_CNN.md §4 weight loading → SafeGetAsset (asset system)
handoff(Gemini): cnn_v3 weight assets registered; export and C++ load path documented
12 hours  docs(cnn_v3): add uv inline deps to export_cnn_v3_weights.py  skal
12 hours  fix size  skal
https://colorifyai.art/photo-to-sketch/#playground with 'Ink Sketch' applies a small zoom and resize
12 hours  feat(cnn_v3): patch alignment search, resume, Ctrl-C save  skal
- --patch-search-window N: at dataset init, find per-patch (dx,dy) in [-N,N]² that minimises grayscale MSE between source albedo and target; result cached so __getitem__ pays only a list-lookup per sample.
- --resume [CKPT]: restore model + Adam state from a checkpoint; omit path to auto-select the latest in --checkpoint-dir.
- Ctrl-C (SIGINT) finishes the current batch, then saves a checkpoint before exiting; finally-block guarded so no spurious epoch-0 save.
- Review: remove unused sd variable, lift patch_idx out of duplicate computation, move _LUMA to Constants block, update module docstring.
handoff(Gemini): cnn_v3/training updated — no C++ or test changes.
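The per-patch alignment search described above can be sketched in a few lines of numpy: exhaustively scan offsets in [-N,N]² and keep the one with the lowest grayscale MSE. The function name and signature here are illustrative, not the actual --patch-search-window implementation.

```python
import numpy as np

def best_patch_offset(src_gray, tgt_gray, y, x, size, window):
    """Find the (dy, dx) in [-window, window]^2 that minimises the
    grayscale MSE between the source patch at (y, x) and the same-size
    patch in the target shifted by that offset. Hypothetical helper."""
    ref = src_gray[y:y + size, x:x + size]
    best, best_err = (0, 0), np.inf
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            yy, xx = y + dy, x + dx
            # skip offsets that would read outside the target image
            if yy < 0 or xx < 0 or yy + size > tgt_gray.shape[0] \
                    or xx + size > tgt_gray.shape[1]:
                continue
            cand = tgt_gray[yy:yy + size, xx:xx + size]
            err = np.mean((ref - cand) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Caching the result per patch, as the commit does, turns this O(N²) scan into a one-time init cost.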
12 hours  normalize sample dimension  skal
13 hours  docs(cnn_v3): add uv inline deps to train_cnn_v3.py + HOW_TO_CNN note  skal
handoff(Gemini): train_cnn_v3.py now has uv script metadata block (torch, torchvision, numpy, pillow, opencv-python). HOW_TO_CNN §2 Prerequisites updated with uv quick-start alternative.
13 hours  perf(cnn_v3): cache dataset images at init to avoid per-patch disk I/O  skal
handoff(Gemini): CNNv3Dataset now loads all samples once in __init__ into self._cache; __getitem__ reads from cache instead of reloading PNGs each call. Eliminates N×patches_per_image file loads per epoch.
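The load-once pattern the commit describes is small enough to sketch; this class and its `loaders` argument are illustrative stand-ins, not the real CNNv3Dataset API.

```python
class CachedDataset:
    """Load every sample once in __init__ and serve items from memory,
    so __getitem__ never touches the disk on the training hot path."""

    def __init__(self, loaders):
        # loaders: list of zero-arg callables, each loading one sample
        self._cache = [load() for load in loaders]

    def __len__(self):
        return len(self._cache)

    def __getitem__(self, idx):
        return self._cache[idx]  # pure list lookup, no I/O
```

With N samples and P patches per image, this replaces N×P file loads per epoch with N loads total at startup, at the cost of holding all decoded images in RAM.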
13 hours  docs(cnn_v3): add Windows 10 + CUDA training section to HOW_TO_CNN §2  skal
13 hours  fix(cnn_v3): correct weight budget in docstring (3.9→5.4 KB f16)  skal
13 hours  fix(cnn_v3): resize target to albedo dims when sizes differ  skal
target.png can have a different resolution than albedo.png in simple samples; patch slicing into the smaller target produced 0×0 tensors, crashing torch.stack in the DataLoader collate.
handoff(Gemini): target resized in _load_sample (LANCZOS) + note in HOW_TO_CNN §1c.
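The shape contract behind this fix — target forced to the albedo's (H, W) before any patch slicing — can be shown with a tiny numpy nearest-neighbour resize. The real code uses PIL's LANCZOS filter; nearest indexing is used here only to keep the sketch self-contained, and the function name is hypothetical.

```python
import numpy as np

def match_size(target, h, w):
    """Resize `target` (H, W, C) to (h, w, C) by nearest-neighbour
    index mapping — a stand-in for the PIL LANCZOS resize in
    _load_sample, just to illustrate the shape guarantee."""
    ys = np.arange(h) * target.shape[0] // h
    xs = np.arange(w) * target.shape[1] // w
    return target[ys][:, xs]
```

Once target and albedo share dimensions, a patch slice can never come back 0×0, so torch.stack in the DataLoader collate sees uniformly shaped tensors.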
13 hours  docs(cnn_v3): add full Old House example to HOW_TO_CNN §1b  skal
handoff(Gemini): added render + batch-pack example commands at end of section 1b
14 hours  fix(cnn_v3): native OPEN_EXR_MULTILAYER + quiet render + flexible channel names  skal
blender_export.py:
- Replace broken compositor FileOutput approach with native OPEN_EXR_MULTILAYER render output; all enabled passes included automatically, no socket wiring needed
- Suppress Fra:/Mem: render spam via os.dup2 fd redirect; per-frame progress printed to stderr via render_post handler
pack_blender_sample.py:
- get_pass_r: try .R/.X/.Y/.Z/'' suffixes + aliases param for Depth→Z fallback
- combined_rgba loaded once via ("Combined","Image") loop; shared by transp+target
- Remove unused sys import
HOW_TO_CNN.md: update channel table to native EXR naming (Depth.Z, IndexOB.X, Shadow.X), fix example command, note Shadow defaults to 255 when absent
handoff(Gemini): blender pipeline now produces correct multilayer EXR with all G-buffer passes; pack script handles native channel naming
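The suffix-and-alias fallback that get_pass_r performs can be sketched as a plain dict lookup loop; the signature below mirrors the commit's description but the exact parameters are illustrative.

```python
def get_pass_r(channels, base, aliases=(), suffixes=(".R", ".X", ".Y", ".Z", "")):
    """Look up a render-pass channel, trying several suffix spellings
    (.R/.X/.Y/.Z/none), then falling back to alias names such as
    Depth -> Z. `channels` maps full EXR channel names to arrays.
    Illustrative version of the pack_blender_sample.py lookup."""
    for name in (base, *aliases):
        for suf in suffixes:
            key = name + suf
            if key in channels:
                return channels[key]
    raise KeyError(base)
```

This tolerates the channel-naming variance between compositor-routed and native multilayer EXR output without hard-coding one spelling per pass.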
14 hours  docs(cnn_v3): blender4 alias + Blender 4.5 LTS requirement for training data  skal
14 hours  fix(blender_export): version detection + Blender 5.x warning + cleanup  skal
- Use bpy.app.version for version detection instead of attribute sniffing
- Blender 5.0.x: warn that per-pass compositor routing is broken (Combined only); compositing_node_group path kept ready for when Blender fixes this upstream
- Remove all DEBUG prints and failed use_nodes=True experiment
- configure_scene() returns only discard_dir (compositor always configured)
- Move _SOCKET_ALIASES to module level; simplify slots/None fallback
handoff(Gemini): blender_export.py stable for Blender 4.5 LTS; Blender 5.x path is forward-compatible but produces Combined-only output until upstream fix.
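Version detection via bpy.app.version reduces to a tuple comparison. A minimal sketch of the branch, factored into a pure function so it can be shown without a running Blender (the helper name is made up; the real script reads bpy.app.version directly):

```python
def compositor_attr(version):
    """Return the scene attribute holding the compositor node tree for
    a given bpy.app.version tuple: Blender 5+ moved it from
    scene.node_tree to scene.compositing_node_group."""
    return "compositing_node_group" if version >= (5, 0, 0) else "node_tree"
```

Tuple comparison is why bpy.app.version beats attribute sniffing: (4, 5, 3) < (5, 0, 0) compares element-wise with no string parsing or hasattr guessing.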
15 hours  fix(cnn_v3): blender_export Blender 5 compositor activation + document RenderLayer sockets  skal
- Activate compositor in Blender 5.0+ by relying on compositing_node_group assignment (no use_nodes needed, avoids deprecation warning)
- Document full CompositorNodeRLayers output socket list for Blender 5.0.1
- Clean up SOCKET_ALIASES to match confirmed socket names
15 hours  feat(cnn_v3): blender_export print pack_blender_sample.py batch command after render  skal
15 hours  fix(cnn_v3): blender_export fallback socket name aliases for Shadow etc.  skal
15 hours  fix(cnn_v3): blender_export discard dir next to --output, not in /tmp  skal
15 hours  fix(cnn_v3): blender_export.py Blender 5 File Output node slots + file_name  skal
- Prefer file_output_items over file_slots; use explicit is-None checks so empty collections do not fall through to the legacy attribute.
- Clear out_node.file_name so multilayer EXR frames are named 0001.exr instead of file_name0001.exr.
handoff(Gemini): blender_export.py now produces frames/0001.exr on Blender 5.0.1.
15 hours  fix(blender_export): only write plane_distance for PLANE objects  skal
16 hours  docs(cnn_v3): clarify --output is a base dir, not a frame_### pattern  skal
16 hours  docs(cnn_v3): update HOW_TO_CNN for Blender 5.x compatibility  skal
16 hours  fix(cnn_v3): blender_export.py Blender 5.x API compatibility  skal
- compositor: use compositing_node_group (Blender 5+) / node_tree (<=4.x)
- file output: use file_output_items.new(type, name) (5+) / file_slots (older)
- file output: use directory attr (5+) / base_path (older)
- suppress default PNG output via mkdtemp + shutil.rmtree after render
- link passes by name instead of positional index
- add TODO for Shadow socket name variance across blend files
- clean up: extract helpers, PASS_SOCKETS constant with socket types
handoff(Gemini): blender_export.py now works on Blender 5.0.1
16 hours  fix(cnn_v3): blender_export --view-layer flag + fallback to layer[0]  skal
Fixes KeyError when blend file uses a non-default view layer name. Adds --view-layer NAME arg; pass '?' to list available layers. Defaults to index 0 with a clear error if the name is not found.
handoff(Gemini): blender_export.py view layer selection now robust
16 hours  feat(cnn_v3): gen_sample tool + 7 simple training samples  skal
- pack_photo_sample.py: --target now required (no albedo fallback)
- gen_sample.py: bash wrapper with positional args (input target output_dir)
- input/photo7.jpg: copy of photo2 (second style target)
- target_1: photo2_1_out→photo2_out, photo2_2_out→photo7_out
- dataset/simple/sample_001..007: 7 packed photo/target pairs
handoff(Gemini): training data ready; next step is train_cnn_v3.py run
34 hours  refactor(cnn_v3): code review — comments, simplifications, test fix  skal
C++:
- cnn_v3_effect.cc: fix declare_nodes comment (output node declared by caller)
- cnn_v3_effect.cc: add TODO(phase-7) marker for FiLM MLP replacement
WGSL:
- cnn_v3_bottleneck.wgsl: consolidate _pad fields onto one line, explain why array<u32,3> is invalid in uniform address space
- cnn_v3_enc0.wgsl: fix "12xu8" → "12ch u8norm" in header comment
- cnn_v3_dec0.wgsl: clarify parity note (sigmoid after FiLM+ReLU, not raw conv)
- cnn_v3_common.wgsl: clarify unpack_8ch pack layout (low/high 16 bits)
Python:
- cnn_v3_utils.py: replace PIL-based _upsample_nearest (uint8 round-trip) with pure numpy index arithmetic; rename _resize_rgb → _resize_img (handles any channel count); add comment on normal zero-pad workaround
- export_cnn_v3_weights.py: add cross-ref to cnn_v3_effect.cc constants; clarify weight count comments with Conv notation
Test:
- test_cnn_v3_parity.cc: enc0/dec1 layer failures now return 0 (were print-only)
handoff(Gemini): CNN v3 review complete, 36/36 tests passing.
37 hours  feat(cnn_v3): HTML WebGPU tool (index.html + shaders.js + tester.js)  skal
3-file tool, 939 lines total. Implements full U-Net+FiLM inference in the browser: Pack→Enc0→Enc1→Bottleneck→Dec1→Dec0 compute passes, layer visualisation (Feat/Enc0/Enc1/BN/Dec1/Output), FiLM MLP sliders, drag-drop weights + image/video, Save PNG, diff/blend view modes.
HOW_TO_CNN.md §7 updated to reflect tool is implemented.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
38 hours  feat(cnn_v3): export script + HOW_TO_CNN.md playbook  skal
- export_cnn_v3_weights.py: .pth → cnn_v3_weights.bin (f16 packed u32) + cnn_v3_film_mlp.bin (f32)
- HOW_TO_CNN.md: full pipeline playbook (data collection, training, export, C++ wiring, parity, HTML tool)
- TODO.md: mark export script done
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
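The "f16 packed u32" layout mentioned above — two half floats per 32-bit word — is a one-liner in numpy via a dtype view. This sketch assumes a little-endian host and says nothing about the .bin header or weight ordering, which the real export script defines:

```python
import numpy as np

def pack_f16_pairs(weights):
    """Pack a flat float sequence as f16 pairs inside u32 words.
    Illustrative of the cnn_v3_weights.bin payload encoding only;
    assumes little-endian (even index lands in the low 16 bits)."""
    w = np.asarray(weights, np.float16)
    if w.size % 2:  # pad to an even count so pairs line up
        w = np.append(w, np.float16(0))
    return w.view(np.uint32)
```

On the WGSL side, each u32 is then split back into two halves with unpack2x16float-style bit operations.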
38 hours  feat(cnn_v3): Phase 6 — training script (train_cnn_v3.py + cnn_v3_utils.py)  skal
- train_cnn_v3.py: CNNv3 U-Net+FiLM model, training loop, CLI
- cnn_v3_utils.py: image I/O, pyrdown, depth_gradient, assemble_features, apply_channel_dropout, detect_salient_points, CNNv3Dataset
- Patch-based training (default 64×64) with salient-point extraction (harris/shi-tomasi/fast/gradient/random detectors, pre-cached at init)
- Channel dropout for geometric/context/temporal channels
- Random FiLM conditioning per sample for joint MLP+U-Net training
- docs: HOWTO.md §3 updated with commands and flag reference
- TODO.md: Phase 6 marked done, export script noted as next step
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
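Channel dropout for the geometric/context/temporal channels amounts to zeroing whole channel groups at random during training, so the net learns to cope with missing G-buffer data (e.g. photo-only samples). A minimal numpy sketch — the group indices and signature are illustrative, not the real apply_channel_dropout API:

```python
import numpy as np

def apply_channel_dropout(feat, groups, p, rng):
    """Zero out each channel group with probability p.
    feat: (H, W, C) feature array; groups: list of channel-index
    lists (e.g. geometric, context, temporal). Sketch only."""
    out = feat.copy()
    for chans in groups:
        if rng.random() < p:
            out[..., chans] = 0.0
    return out
```

Dropping groups rather than individual channels matches how the channels go missing in practice: a photo-only sample has no depth at all, not a random subset of depth pixels.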
38 hours  docs(cnn_v3): update CNN_V3.md + HOWTO.md to reflect Phases 1-5 complete  skal
- CNN_V3.md: status line, architecture channel counts (8/16→4/8), FiLM MLP output count (96→40 params), size budget table (real implemented values)
- HOWTO.md: Phase status table (5→done, add phase 6 training TODO), sections 3-5 rewritten to reflect what exists vs what is still planned
38 hours  feat(cnn_v3): Phase 5 complete — parity validation passing (36/36 tests)  skal
- Add test_cnn_v3_parity.cc: zero_weights + random_weights tests
- Add gen_test_vectors.py: PyTorch reference implementation for enc0/enc1/bn/dec1/dec0
- Add test_vectors.h: generated C header with enc0, dec1, output expected values
- Fix declare_nodes(): intermediate textures at fractional resolutions (W/2, W/4) using new NodeRegistry::default_width()/default_height() getters
- Add layer-by-layer readback (enc0, dec1) for regression coverage
- Final parity: enc0 max_err=1.95e-3, dec1 max_err=1.95e-3, out max_err=4.88e-4
handoff(Claude): CNN v3 parity done. Next: train_cnn_v3.py (FiLM MLP training).
39 hours  docs: session handoff — CNN v3 Phase 4 complete  skal
- TODO.md: mark Phase 4 done, add FiLM MLP training details (blocked on train_cnn_v3.py), clarify what 'real' set_film_params() requires
- COMPLETED.md: archive Phase 4 with alignment fix note (vec3u→64/96 bytes)
handoff(Gemini): next up CNN v3 Phase 5 (parity validation) or train_cnn_v3.py
39 hours  feat(cnn_v3): Phase 4 complete — CNNv3Effect C++ + FiLM uniform upload  skal
- cnn_v3/src/cnn_v3_effect.{h,cc}: full Effect subclass with 5 compute passes (enc0→enc1→bottleneck→dec1→dec0), shared weights storage buffer, per-pass uniform buffers, set_film_params() API
- Fixed WGSL/C++ struct alignment: vec3u has align=16, so CnnV3Params4ch is 64 bytes and CnnV3ParamsEnc1 is 96 bytes (not 48/80)
- Weight offsets computed as explicit formulas (e.g. 20*4*9+4) for clarity
- Registered in CMake, shaders.h/cc, demo_effects.h, test_demo_effects.cc
- 35/35 tests pass
handoff(Gemini): CNN v3 Phase 5 next — parity validation (Python ref vs WGSL)
39 hours  feat(cnn_v3): Phase 3 complete — WGSL U-Net inference shaders  skal
5 compute shaders + cnn_v3/common snippet:
- enc0: Conv(20→4,3×3) + FiLM + ReLU, full-res
- enc1: AvgPool + Conv(4→8,3×3) + FiLM + ReLU, half-res
- bottleneck: AvgPool + Conv(8→8,1×1) + ReLU, quarter-res
- dec1: NearestUp + cat(enc1) + Conv(16→4) + FiLM, half-res
- dec0: NearestUp + cat(enc0) + Conv(8→4) + FiLM + Sigmoid, full-res
Parity rules: zero-pad conv, AvgPool down, NearestUp, FiLM after conv+bias, skip=concat, OIHW weights+bias layout. Matches PyTorch train_cnn_v3.py forward() exactly.
Registered in workspaces/main/assets.txt + src/effects/shaders.cc. Weight layout + Params struct documented in cnn_v3/docs/HOWTO.md §7.
Next: Phase 4 — C++ CNNv3Effect + FiLM uniform upload.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
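The parity rules above pin down an exact operator order: zero-padded conv over OIHW weights, add bias, then FiLM (γ·y + β), then the activation. A reference-style numpy sketch of one such layer — naive loops for clarity, not the WGSL or PyTorch implementation itself:

```python
import numpy as np

def conv3x3_film_relu(x, w, b, gamma, beta):
    """One enc-layer parity rule in numpy: zero-pad 3x3 conv (OIHW
    weights), + bias, then FiLM gamma*y + beta, then ReLU.
    Shapes: x (C,H,W), w (O,C,3,3), b/gamma/beta (O,)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad conv rule
    O = w.shape[0]
    y = np.zeros((O, H, W))
    for o in range(O):
        for c in range(C):
            for ky in range(3):
                for kx in range(3):
                    y[o] += w[o, c, ky, kx] * xp[c, ky:ky + H, kx:kx + W]
        y[o] += b[o]
    y = gamma[:, None, None] * y + beta[:, None, None]  # FiLM after conv+bias
    return np.maximum(y, 0.0)  # activation last
```

Getting this order identical in PyTorch, WGSL, and the HTML tool is what makes the ≤1/255 per-pixel parity checks meaningful.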
40 hours  make the heptagon effect more interesting  skal
3 days  feat(cnn_v3): Phase 1 complete - GBufferEffect integrated + HOWTO playbook  skal
- Wire GBufferEffect into demo build: assets.txt, DemoSourceLists.cmake, demo_effects.h, shaders.h/cc. ShaderComposer::Compose() applied to gbuf_raster.wgsl (resolves #include "common_uniforms").
- Add GBufferEffect construction test. 35/35 passing.
- Write cnn_v3/docs/HOWTO.md: G-buffer wiring, training data prep, training plan, per-pixel validation workflow, phase status table, troubleshooting guide.
- Add project hooks: remind to update HOWTO.md on cnn_v3/ edits; warn on direct str_view(*_wgsl) usage bypassing ShaderComposer.
- Update PROJECT_CONTEXT.md and TODO.md: Phase 1 done, Phase 3 (WGSL U-Net shaders) is next active.
handoff(Gemini): CNN v3 Phase 3 is next - WGSL enc/dec/bottleneck/FiLM shaders in cnn_v3/shaders/. See cnn_v3/docs/CNN_V3.md Architecture section and cnn_v3/docs/HOWTO.md section 3 for spec. GBufferEffect outputs feat_tex0 + feat_tex1 (rgba32uint, 20ch, 32 bytes/pixel). C++ CNNv3Effect (Phase 4) takes those as input nodes.
3 days  feat(cnn_v3): G-buffer phase 1 + training infrastructure  skal
G-buffer (Phase 1):
- Add NodeTypes GBUF_ALBEDO/DEPTH32/R8/RGBA32UINT to NodeRegistry
- GBufferEffect: MRT raster pass (albedo+normal_mat+depth) + pack compute
- Shaders: gbuf_raster.wgsl (MRT), gbuf_pack.wgsl (feature packing, 32B/px)
- Shadow/SDF passes stubbed (placeholder textures), CMake integration deferred
Training infrastructure (Phase 2):
- blender_export.py: headless EXR export with all G-buffer render passes
- pack_blender_sample.py: EXR → per-channel PNGs (oct-normals, 1/z depth)
- pack_photo_sample.py: photo → zero-filled G-buffer sample layout
handoff(Gemini): G-buffer phases 3-5 remain (U-Net shaders, CNNv3Effect, parity)
3 days  docs(cnn_v3): full design doc — U-Net + FiLM architecture plan  skal
- CNN_V3.md: complete design document
- U-Net enc_channels=[4,8], ~5 KB f16 weights
- FiLM conditioning (5D → γ/β per level, CPU-side MLP)
- 20-channel feature buffer, 32 bytes/pixel: two rgba32uint textures
  - feat_tex0: albedo.rgb, normal.xy, depth, depth_grad.xy (f16)
  - feat_tex1: mat_id, prev.rgb, mip1.rgb, mip2.rgb, shadow, transp (u8)
- 4-pass G-buffer: raster MRT + SDF compute + lighting + pack
- Per-pixel parity framework: PyTorch / HTML WebGPU / C++ WebGPU (≤1/255)
- Training pipelines: Blender full G-buffer + photo-only (channel dropout)
- train_cnn_v3_full.sh spec (modelled on v2 script)
- HTML tool adaptation plan from cnn_v2/tools/cnn_v2_test/index.html
- Binary format v3 header spec
- 8-phase ordered implementation checklist
- TODO.md: add CNN v3 U-Net+FiLM future task with phases
- cnn_v3/README.md: update status to design phase
handoff(Gemini): CNN v3 design complete. Phase 0 (stub G-buffer) unblocks all other phases — one compute shader writing feat_tex0+feat_tex1 with synthetic values from the current framebuffer. See cnn_v3/docs/CNN_V3.md Implementation Checklist.
3 days  docs: archive stale/completed docs, compact active refs (-1300 lines)  skal
- Archive WORKSPACE_SYSTEM.md (completed); replace with 36-line operational ref
- Archive SHADER_REUSE_INVESTIGATION.md (implemented Feb 2026)
- Archive GPU_PROCEDURAL_PHASE4.md (completed feature)
- Archive GEOM_BUFFER.md (ideation only, never implemented)
- Archive SPECTRAL_BRUSH_EDITOR.md (v1 DCT approach, superseded by MQ v2)
- Update CLAUDE.md Tier 3 refs; point Audio to SPECTRAL_BRUSH_2.md
- Update TODO.md Task #5 design link to SPECTRAL_BRUSH_2.md
- Update COMPLETED.md archive index
handoff(Claude): doc cleanup done, 30 active docs (was 34), -1300 lines
3 days  chore: remove broken seeking test, demote CNN v2 quant to future CNN v3  skal
handoff(Gemini): removed test_audio_engine_seeking (broken, not worth fixing); moved CNN v2 8-bit quantization to Future as CNN v3 task.
3 days  add a commit rule  skal
3 days  docs(init): add glfw as macOS brew dependency  skal
project_init.sh now checks/installs both wgpu-native and glfw via brew. HOWTO.md documents the macOS prerequisites before the build steps.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>