# CNN v3

Enhanced CNN post-processing with next-generation features.

## Directory Structure

```
cnn_v3/
├── docs/          # Documentation and design notes
├── scripts/       # Training and build automation scripts
├── shaders/       # WGSL compute shaders
├── src/           # C++ implementation
├── tools/         # Testing and validation tools
├── training/      # Training pipeline
│   ├── input/     # Source images for training
│   ├── target_1/  # Style 1 target images
│   └── target_2/  # Style 2 target images
└── weights/       # Trained model weights (binary format)
```

## Training Data

Training images are tracked in the repository:

- `training/input/` - Original input images
- `training/target_1/` - First style transformation targets
- `training/target_2/` - Second style transformation targets

Multiple target directories allow training different stylistic transformations from the same input set. Add images directly to these directories and commit them.

## Status

**Phases 1–7 complete.** 36/36 tests pass.

| Phase | Status |
|-------|--------|
| 1 — G-buffer (raster + pack) | ✅ |
| 2 — Training infrastructure | ✅ |
| 3 — WGSL U-Net shaders | ✅ |
| 4 — C++ CNNv3Effect + FiLM | ✅ |
| 5 — Parity validation | ✅ max_err=4.88e-4 |
| 6 — Training script | ✅ train_cnn_v3.py |
| 7 — Validation tools | ✅ GBufViewEffect + web sample loader |

See `cnn_v3/docs/HOWTO.md` for the practical playbook (§9 covers validation tools).
See `cnn_v3/docs/CNN_V3.md` for full design.
See `cnn_v2/` for reference implementation.
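
The training-data workflow described above can be sketched as follows. This is a minimal illustration, not a project script: `new_scene.png` is a placeholder filename, and the assumption that an input image and its stylized target share the same basename is mine, not stated in the docs.

```shell
# Illustrative only — "new_scene.png" is a placeholder, not a real asset.
mkdir -p training/input training/target_1   # these dirs already exist in the repo
printf 'stub' > /tmp/new_scene.png          # stand-in for a real capture
cp /tmp/new_scene.png training/input/       # source image
cp /tmp/new_scene.png training/target_1/    # stylized target (same basename — an assumption)
# Then commit both halves of the pair:
#   git add training/input/new_scene.png training/target_1/new_scene.png
#   git commit -m "training: add new_scene input/target pair"
```

Keeping the input and target for a scene staged and committed together avoids half-added pairs in the training set.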