path: root/cnn_v3/training/train_cnn_v3.py
12 hours ago · skal
feat(cnn_v3): 3×3 dilated bottleneck + Sobel loss + FiLM warmup + architecture PNG

- Replace 1×1 pointwise bottleneck with Conv(8→8, 3×3, dilation=2): effective RF grows from ~13px to ~29px at ¼res (~+1 KB weights)
- Add Sobel edge loss in training (--edge-loss-weight, default 0.1)
- Add FiLM 2-phase training: freeze MLP for warmup epochs, then unfreeze at lr×0.1 (--film-warmup-epochs, default 50)
- Update weight layout: BN 72→584 f16, total 1964→2476 f16 (4952 B)
- Cascade offsets in C++ effect, JS tool, export/gen_test_vectors scripts
- Regenerate test_vectors.h (1238 u32); parity max_err=9.77e-04
- Generate dark-theme U-Net+FiLM architecture PNG (gen_architecture_png.py)
- Replace ASCII art in CNN_V3.md and HOW_TO_CNN.md with PNG embed

handoff(Gemini): bottleneck dilation + Sobel loss + FiLM warmup landed. Next: run first real training pass (see cnn_v3/docs/HOWTO.md §3).
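The Sobel edge loss is only named in the commit above; a minimal numpy sketch of what such a term computes and how it is typically weighted into the objective. The function names and the L1 base loss are assumptions; only the --edge-loss-weight default of 0.1 comes from the commit message.

```python
import numpy as np

# Standard 3×3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D correlation, sufficient for a 3×3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def sobel_edge_loss(pred, target):
    """Mean absolute difference between Sobel gradients of pred and target."""
    loss = 0.0
    for k in (SOBEL_X, SOBEL_Y):
        loss += np.abs(conv2d_valid(pred, k) - conv2d_valid(target, k)).mean()
    return loss

def total_loss(pred, target, edge_loss_weight=0.1):
    """Base reconstruction loss plus the weighted edge term."""
    l1 = np.abs(pred - target).mean()
    return l1 + edge_loss_weight * sobel_edge_loss(pred, target)
```

The penalty on gradient mismatch pushes the network to reproduce edge structure that a plain pixel-wise loss tends to blur; the actual training code would express the same thing with torch convolutions so it stays differentiable.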
13 hours ago · skal
feat(cnn_v3/training): add --single-sample option + doc fixes

- train_cnn_v3.py: --single-sample <dir> implies --full-image + --batch-size 1
- cnn_v3_utils.py: CNNv3Dataset accepts single_sample= kwarg (explicit override)
- HOWTO.md: document --single-sample workflow, fix pack_photo_sample.py usage (--target required)
- HOW_TO_CNN.md: fix GBufferEffect seq input (prev_cnn→source), fix binary name (demo→demo64k), add --resume to flag table, remove stale "pack without target" block

handoff(Gemini): --single-sample <dir> added to train_cnn_v3.py; docs audited and corrected
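One flag implying others, as --single-sample does here, is straightforward to express after argparse parsing. A sketch under assumptions: the three flag names come from the commit, but the batch-size default of 16 and the function name are hypothetical.

```python
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser()
    p.add_argument("--single-sample", metavar="DIR", default=None)
    p.add_argument("--full-image", action="store_true")
    p.add_argument("--batch-size", type=int, default=16)
    args = p.parse_args(argv)
    # --single-sample <dir> implies --full-image + --batch-size 1,
    # overriding whatever the user passed for those two flags.
    if args.single_sample is not None:
        args.full_image = True
        args.batch_size = 1
    return args
```

Doing the implication after `parse_args` keeps the help text honest (each flag documents only itself) while guaranteeing a consistent overfit-on-one-sample configuration.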
3 days ago · skal
feat(cnn_v3): patch alignment search, resume, Ctrl-C save

- --patch-search-window N: at dataset init, find the per-patch (dx,dy) in [-N,N]² that minimises grayscale MSE between source albedo and target; the result is cached so __getitem__ pays only a list lookup per sample.
- --resume [CKPT]: restore model + Adam state from a checkpoint; omit the path to auto-select the latest in --checkpoint-dir.
- Ctrl-C (SIGINT) finishes the current batch, then saves a checkpoint before exiting; the finally-block is guarded so there is no spurious epoch-0 save.
- Review: remove unused sd variable, lift patch_idx out of duplicate computation, move _LUMA to the Constants block, update module docstring.

handoff(Gemini): cnn_v3/training updated — no C++ or test changes.
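The per-patch alignment search described above is a small exhaustive loop. A sketch, assuming grayscale inputs are already available; the function name and argument order are hypothetical, the [-N,N]² grayscale-MSE search itself is from the commit.

```python
import numpy as np

def best_patch_offset(src_gray, tgt_gray, x, y, size, window):
    """Search (dx, dy) in [-window, window]² for the shift of the target
    patch that minimises grayscale MSE against the source patch."""
    ref = src_gray[y:y + size, x:x + size]
    best, best_mse = (0, 0), np.inf
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            yy, xx = y + dy, x + dx
            # Skip shifts that would read outside the target image.
            if (yy < 0 or xx < 0 or
                    yy + size > tgt_gray.shape[0] or
                    xx + size > tgt_gray.shape[1]):
                continue
            mse = np.mean((ref - tgt_gray[yy:yy + size, xx:xx + size]) ** 2)
            if mse < best_mse:
                best_mse, best = mse, (dx, dy)
    return best
```

Run once per patch at dataset init and stored in a list, this makes `__getitem__` a constant-time lookup, matching the caching behaviour the commit describes.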
3 days ago · skal
docs(cnn_v3): add uv inline deps to train_cnn_v3.py + HOW_TO_CNN note

handoff(Gemini): train_cnn_v3.py now has a uv script metadata block (torch, torchvision, numpy, pillow, opencv-python). HOW_TO_CNN §2 Prerequisites updated with a uv quick-start alternative.
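The "uv script metadata block" mentioned in the handoff is the PEP 723 inline-metadata format that uv reads from the top of a script. A sketch with the five dependencies this commit names; the requires-python bound is an assumption.

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "torch",
#     "torchvision",
#     "numpy",
#     "pillow",
#     "opencv-python",
# ]
# ///
```

With this block in place, `uv run train_cnn_v3.py` resolves and installs the listed packages into an ephemeral environment automatically, which is presumably the quick-start alternative added to HOW_TO_CNN §2.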
3 days ago · skal
fix(cnn_v3): correct weight budget in docstring (3.9→5.4 KB f16)
4 days ago · skal
feat(cnn_v3): Phase 6 — training script (train_cnn_v3.py + cnn_v3_utils.py)

- train_cnn_v3.py: CNNv3 U-Net+FiLM model, training loop, CLI
- cnn_v3_utils.py: image I/O, pyrdown, depth_gradient, assemble_features, apply_channel_dropout, detect_salient_points, CNNv3Dataset
- Patch-based training (default 64×64) with salient-point extraction (harris/shi-tomasi/fast/gradient/random detectors, pre-cached at init)
- Channel dropout for geometric/context/temporal channels
- Random FiLM conditioning per sample for joint MLP+U-Net training
- docs: HOWTO.md §3 updated with commands and flag reference
- TODO.md: Phase 6 marked done, export script noted as next step

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
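`apply_channel_dropout` is listed among the cnn_v3_utils.py helpers, but its real signature is not shown in the log, so the grouping and probability below are guesses. The idea, randomly zeroing whole channel groups (geometric / context / temporal) so the network learns to cope with missing inputs, can be sketched as:

```python
import numpy as np

def apply_channel_dropout(features, groups, p=0.3, rng=None):
    """Zero out whole channel groups, each with independent probability p.

    features: (C, H, W) array.
    groups:   list of channel-index lists, e.g. the geometric, context,
              and temporal blocks named in the commit message.
    """
    rng = rng if rng is not None else np.random.default_rng()
    out = features.copy()  # leave the caller's array untouched
    for idx in groups:
        if rng.random() < p:
            out[idx] = 0.0
    return out
```

Dropping a whole semantic group at once (rather than single channels) matches the stated goal: the network must still produce a plausible output when an entire input modality is absent.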