| Age | Commit message (Collapse) | Author |
|
- Add helper functions: export_weights(), find_latest_checkpoint(), build_target()
- Eliminate duplicate export logic (3 instances → 1 function)
- Eliminate duplicate checkpoint finding (2 instances → 1 function)
- Consolidate build commands (4 instances → 1 function)
- Simplify optional flags with inline command substitution
- Fix validation mode: correct cnn_test argument order (positional args before --cnn-version)
- 30 fewer lines, improved readability
handoff(Claude): Refactored CNN v2 training script, fixed validation bug
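The helpers named above are shell functions; a hypothetical Python sketch of the find_latest_checkpoint() idea (the real implementation lives in the script and may differ):

```python
from pathlib import Path

def find_latest_checkpoint(checkpoint_dir):
    """Return the most recently modified .pth checkpoint, or None if absent."""
    checkpoints = sorted(
        Path(checkpoint_dir).glob("checkpoint_epoch_*.pth"),
        key=lambda p: p.stat().st_mtime,
    )
    return checkpoints[-1] if checkpoints else None
```

Sorting by mtime rather than filename avoids the lexicographic trap where `epoch_100` sorts before `epoch_50`.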
|
|
Add grayscale loss computation
Add option to compute loss on grayscale luminance (Y = 0.299*R + 0.587*G + 0.114*B) instead of full RGBA channels. Useful for training models that prioritize luminance accuracy over color accuracy.
Changes:
- training/train_cnn_v2.py: Add --grayscale-loss flag and grayscale conversion in loss computation
- scripts/train_cnn_v2_full.sh: Add --grayscale-loss parameter support
- doc/CNN_V2.md: Document grayscale loss in training configuration and checkpoint format
- doc/HOWTO.md: Add usage examples for --grayscale-loss flag
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
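The conversion amounts to a dot product with the BT.601 luma weights quoted above before computing the error. A dependency-light sketch in NumPy (the real flag lives in train_cnn_v2.py, presumably operating on tensors):

```python
import numpy as np

# BT.601 luma weights, as quoted in the commit message.
LUMA = np.array([0.299, 0.587, 0.114])

def grayscale_mse(pred_rgb, target_rgb):
    """MSE on the luminance channel only; inputs are arrays shaped [..., 3]."""
    y_pred = pred_rgb @ LUMA
    y_target = target_rgb @ LUMA
    return float(np.mean((y_pred - y_target) ** 2))
```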
|
|
Expose all hardcoded parameters in train_cnn_v2_full.sh:
- Training: epochs, batch-size, checkpoint-every, kernel-sizes, num-layers, mip-level
- Patches: patch-size, patches-per-image, detector, full-image, image-size
- Directories: input, target, checkpoint-dir, validation-dir
Update --help with organized sections (modes, training, patches, directories).
Update doc/HOWTO.md with usage examples.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Training pipeline now accepts a --mip-level flag (0-3) and passes it to train_cnn_v2.py.
Compatible with all existing modes (train, validate, export-only).
Changes:
- Add --mip-level argument parsing (default: 0)
- Pass MIP_LEVEL to training command
- Display mip level in config output
- Update help text with examples
Usage: ./scripts/train_cnn_v2_full.sh --mip-level 1
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
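The commit does not show how mip levels are generated; a common definition (assumed here) is repeated 2x2 average pooling, halving resolution once per level:

```python
import numpy as np

def to_mip_level(img, level):
    """Downsample an HxWxC image by 2x2 average pooling, `level` times."""
    for _ in range(level):
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w]  # trim odd rows/cols so the image tiles into 2x2 blocks
        img = (img[0::2, 0::2] + img[1::2, 0::2]
               + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return img
```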
|
|
Allows exporting weights from a checkpoint without training or validation.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
|
Changes:
- KERNEL_SIZES: Use comma-separated values (3,3,3 not 3 3 3)
- Remove --channels (no longer exists in uniform 12D→4D architecture)
- Add --num-layers parameter
- Use export_cnn_v2_weights.py (storage buffer) instead of export_cnn_v2_shader.py
- Fix duplicate export: only export in step 2 (training) or validation mode
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
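Parsing the comma-separated form is a one-liner; a sketch with a sanity check added on top (the odd-size requirement is an assumption, since convolution kernels are typically odd):

```python
def parse_kernel_sizes(spec):
    """Parse a comma-separated kernel spec like '3,3,3' into [3, 3, 3]."""
    sizes = [int(k) for k in spec.split(",") if k]
    if any(k < 1 or k % 2 == 0 for k in sizes):
        raise ValueError(f"kernel sizes must be odd and positive: {spec!r}")
    return sizes
```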
|
|
Each workspace now has a weights/ directory to store binary weight files
from CNN training (e.g., cnn_v2_weights.bin).
Changes:
- Created workspaces/{main,test}/weights/
- Moved cnn_v2_weights.bin → workspaces/main/weights/
- Updated assets.txt reference
- Updated training scripts and export tool paths
handoff(Claude): Workspace weights/ directories added
|
|
Moved main.cc, stub_main.cc, and test_demo.cc from src/ to src/app/
for better organization. Updated cmake/DemoExecutables.cmake paths.
handoff(Claude): App files reorganized into src/app/ directory
|
|
Updated:
- HOWTO.md: Complete pipeline, storage buffer, --validate mode
- TODO.md: Mark CNN v2 complete, add QAT TODO
- PROJECT_CONTEXT.md: Update Effects status
- CNN_V2.md: Mark complete, add storage buffer notes
- train_cnn_v2_full.sh: Add --help message
All documentation now reflects:
- Storage buffer architecture
- Binary weight format
- Live training progress
- Validation-only mode
- 8-bit quantization TODO
|
|
Usage:
./train_cnn_v2_full.sh --validate [checkpoint.pth]
Skips training and weight export, uses existing weights.
Validates all input images with latest (or specified) checkpoint.
Example:
./train_cnn_v2_full.sh --validate checkpoints/checkpoint_epoch_50.pth
|
|
1. Loss printed at every epoch with \r (no scrolling)
2. Validation only on final epoch (not all checkpoints)
3. Process all input images (not just img_000.png)
Training output now shows live progress on a single updating line.
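The single-line progress trick is the carriage return without a newline; a minimal sketch of the idea (function name and format are illustrative, not the script's actual output):

```python
import sys

def print_progress(epoch, total_epochs, loss):
    """Overwrite one status line per epoch instead of scrolling."""
    sys.stdout.write(f"\repoch {epoch}/{total_epochs}  loss {loss:.6f}")
    sys.stdout.flush()
    if epoch == total_epochs:
        sys.stdout.write("\n")  # terminate the line once training finishes
```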
|
|
- Add binary weight format (header + layer info + packed f16)
- New export_cnn_v2_weights.py for binary weight export
- Single cnn_v2_compute.wgsl shader with storage buffer
- Load weights in CNNv2Effect::load_weights()
- Create layer compute pipeline with 5 bindings
- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels
Next: Complete bind group creation and multi-layer compute execution
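A "header + layer info + packed f16" layout can be sketched as below. The magic bytes and field order are illustrative assumptions, not the project's actual format, which export_cnn_v2_weights.py defines:

```python
import struct
import numpy as np

MAGIC = b"CNN2"  # illustrative magic; the real header layout is not specified here

def pack_weights(layers):
    """Pack float32 weight arrays as: magic, layer count, per-layer dims, f16 data."""
    blob = MAGIC + struct.pack("<I", len(layers))
    for w in layers:
        dims = w.shape
        blob += struct.pack("<I", len(dims)) + struct.pack(f"<{len(dims)}I", *dims)
        blob += np.asarray(w, dtype=np.float16).tobytes()
    return blob
```

Halving each weight to f16 matches the goal of keeping the storage buffer small while staying trivial to parse on the C++ side.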
|
|
Salient point detection on original images with patch extraction.
Changes:
- Added PatchDataset class (harris/fast/shi-tomasi/gradient detectors)
- Detects salient points on ORIGINAL images (no resize)
- Extracts 32×32 patches around salient points
- Default: 64 patches/image, harris detector
- Batch size: 16 (512 patches per batch)
Training modes:
1. Patch-based (default): --patch-size 32 --patches-per-image 64 --detector harris
2. Full-image (option): --full-image --image-size 256
Benefits:
- Focuses training on interesting regions
- Handles variable image sizes naturally
- Matches CNN v1 workflow
- Better convergence with limited data (8 images → 512 patches)
Script updated:
- train_cnn_v2_full.sh: Patch-based by default
- Configuration exposed for easy switching
Example:
./scripts/train_cnn_v2_full.sh # Patch-based
# Edit script: uncomment FULL_IMAGE for resize mode
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
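The harris/fast/shi-tomasi detectors typically come from OpenCV; a dependency-free sketch of the "gradient" detector idea, picking the strongest-gradient pixels and cutting fixed-size patches around them (details assumed, not taken from PatchDataset):

```python
import numpy as np

def extract_patches(gray, patch_size=32, patches_per_image=64):
    """Pick the strongest-gradient pixels and cut patches centered on them."""
    gy, gx = np.gradient(gray.astype(np.float32))
    score = np.hypot(gx, gy)
    half = patch_size // 2
    score[:half], score[-half:] = 0, 0      # keep centers away from the borders
    score[:, :half], score[:, -half:] = 0, 0
    idx = np.argsort(score, axis=None)[::-1][:patches_per_image]
    ys, xs = np.unravel_index(idx, score.shape)
    return [gray[y - half:y + half, x - half:x + half] for y, x in zip(ys, xs)]
```

Zeroing a border of width `half` before ranking guarantees every selected center yields a full patch, which is why the variable-size benefit mentioned above comes for free.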
|
|
Training script now resizes all images to fixed size before batching.
Issue: RuntimeError when batching variable-sized images
- Images had different dimensions (376x626 vs 344x361)
- PyTorch DataLoader requires uniform tensor sizes for batching
Solution:
- Add --image-size parameter (default: 256)
- Resize all images to target_size using LANCZOS interpolation
- Makes training independent of the original aspect ratio
Changes:
- train_cnn_v2.py: ImagePairDataset now resizes to fixed dimensions
- train_cnn_v2_full.sh: Added IMAGE_SIZE=256 configuration
Tested: 8 image pairs, variable sizes → uniform 256×256 batches
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
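The dataset-side fix boils down to a forced resize at load time; a sketch with Pillow (the loader shown here is illustrative, ImagePairDataset's actual code may normalize differently):

```python
import numpy as np
from PIL import Image

def load_resized(path, image_size=256):
    """Load an image and force it to image_size x image_size with LANCZOS."""
    img = Image.open(path).convert("RGB")
    img = img.resize((image_size, image_size), Image.LANCZOS)
    return np.asarray(img, dtype=np.float32) / 255.0
```

After this, every sample has identical shape, so the DataLoader's default collation can stack them into a batch without the RuntimeError described above.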
|