- --patch-search-window N: at dataset init, find per-patch (dx,dy) in
[-N,N]² that minimises grayscale MSE between source albedo and target;
result cached so __getitem__ pays only a list-lookup per sample.
- --resume [CKPT]: restore model + Adam state from a checkpoint; omit
path to auto-select the latest in --checkpoint-dir.
- Ctrl-C (SIGINT) finishes the current batch, then saves a checkpoint
before exiting; finally-block guarded so no spurious epoch-0 save.
- Review: remove unused sd variable, hoist patch_idx out of a duplicated
  computation, move _LUMA to the Constants block, update the module docstring.
handoff(Gemini): cnn_v3/training updated — no C++ or test changes.
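The --patch-search-window lookup described above can be sketched as a brute-force scan over offsets; the function name, array layout, and Rec. 601 luma weights below are assumptions for illustration, not the repo's actual code:

```python
import numpy as np

# Assumed Rec. 601 luma weights for the grayscale conversion.
_LUMA = np.array([0.299, 0.587, 0.114])

def best_patch_offset(source, target, x, y, size, n):
    """Return the (dx, dy) in [-n, n]^2 minimising grayscale MSE between the
    source patch at (x+dx, y+dy) and the target patch at (x, y).

    `source`/`target` are HxWx3 float arrays; (x, y) is the patch origin.
    Run once per patch at dataset init and cache the result, so __getitem__
    only does a list lookup.
    """
    tgt = target[y:y + size, x:x + size] @ _LUMA
    best, best_err = (0, 0), np.inf
    h, w = source.shape[:2]
    for dy in range(-n, n + 1):
        for dx in range(-n, n + 1):
            sx, sy = x + dx, y + dy
            if sx < 0 or sy < 0 or sx + size > w or sy + size > h:
                continue  # skip offsets that fall outside the image
            src = source[sy:sy + size, sx:sx + size] @ _LUMA
            err = np.mean((src - tgt) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best
```

The scan is O((2N+1)²) per patch, which is why it is paid once at init rather than per sample.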
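The Ctrl-C behaviour above can be sketched with a deferred-interrupt flag plus a guarded finally block; the class and function names here are illustrative, not the actual training-loop code:

```python
import signal

class GracefulInterrupt:
    """SIGINT only sets a flag; the loop exits at the next batch boundary."""
    def __init__(self):
        self.requested = False
        signal.signal(signal.SIGINT, self._handle)

    def _handle(self, signum, frame):
        self.requested = True  # defer: never abort mid-batch

def run_one_batch(batch):
    pass  # placeholder for forward/backward/optimizer step

def train(batches, save_checkpoint):
    stop = GracefulInterrupt()
    completed = 0
    try:
        for batch in batches:
            run_one_batch(batch)
            completed += 1
            if stop.requested:
                break  # current batch finished; checkpoint in finally
    finally:
        if completed > 0:              # guard: no spurious epoch-0 save
            save_checkpoint(completed)
```

The `completed > 0` guard is what keeps an interrupt before any work from writing an empty checkpoint.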
handoff(Gemini): train_cnn_v3.py now has a uv script metadata block
(torch, torchvision, numpy, pillow, opencv-python). HOW_TO_CNN §2
(Prerequisites) updated with a uv quick-start alternative.
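uv inline script metadata follows the PEP 723 format, so the header added to train_cnn_v3.py likely looks roughly like the sketch below; the `requires-python` bound is an assumption, only the five dependency names come from the commit:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "torch",
#     "torchvision",
#     "numpy",
#     "pillow",
#     "opencv-python",
# ]
# ///
```

With this block in place, `uv run train_cnn_v3.py` resolves and installs the dependencies into an ephemeral environment before executing the script.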
- train_cnn_v3.py: CNNv3 U-Net+FiLM model, training loop, CLI
- cnn_v3_utils.py: image I/O, pyrdown, depth_gradient, assemble_features,
apply_channel_dropout, detect_salient_points, CNNv3Dataset
- Patch-based training (default 64×64) with salient-point extraction
(harris/shi-tomasi/fast/gradient/random detectors, pre-cached at init)
- Channel dropout for geometric/context/temporal channels
- Random FiLM conditioning per sample for joint MLP+U-Net training
- docs: HOWTO.md §3 updated with commands and flag reference
- TODO.md: Phase 6 marked done, export script noted as next step
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
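The channel-dropout idea above can be sketched as independently zeroing whole channel groups; the group names match the commit, but the slice layout, default probability, and function shape are assumptions rather than the actual cnn_v3_utils.apply_channel_dropout:

```python
import numpy as np

# Assumed channel layout: 0-2 albedo (never dropped), then three groups.
GROUPS = {"geometric": slice(3, 6), "context": slice(6, 9), "temporal": slice(9, 12)}

def apply_channel_dropout(features, p=0.2, rng=None):
    """Zero each named channel group of a CxHxW stack with probability p,
    so the network cannot over-rely on any single feature family."""
    rng = rng or np.random.default_rng()
    out = features.copy()
    for name, sl in GROUPS.items():
        if rng.random() < p:
            out[sl] = 0.0
    return out
```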
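For the FiLM conditioning mentioned above, the core operation is feature-wise linear modulation: an MLP maps a conditioning vector to per-channel scale and shift applied to U-Net feature maps. A minimal sketch of just the modulation step (the MLP producing gamma/beta is omitted, and the function name is illustrative):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation of a CxHxW map:
    each channel c becomes gamma[c] * features[c] + beta[c]."""
    return gamma[:, None, None] * features + beta[:, None, None]
```

Sampling gamma/beta conditioning randomly per patch, as the commit describes, trains the MLP and U-Net jointly over the whole conditioning space.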