| Age | Commit message | Author |
|---|---|---|
| 22 hours | TODO: 8-bit weight quantization for 2× size reduction | skal |
| | - Add QAT (quantization-aware training) notes<br>- Requires training with fake quantization<br>- Target: ~1.6 KB weights (vs 3.2 KB f16)<br>- Shader unpacking needs adaptation (4× u8 per u32; see the packing sketch after the table) | |
| 22 hours | CNN v2: storage buffer architecture foundation | skal |
| | - Add binary weight format (header + layer info + packed f16; see the export sketch after the table)<br>- New export_cnn_v2_weights.py for binary weight export<br>- Single cnn_v2_compute.wgsl shader with storage buffer<br>- Load weights in CNNv2Effect::load_weights()<br>- Create layer compute pipeline with 5 bindings<br>- Fast training config: 100 epochs, 3×3 kernels, 8→4→4 channels<br>Next: complete bind group creation and multi-layer compute execution | |
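
The 2× target in the quantization TODO follows from the storage types: an f16 weight takes two bytes while an int8 code takes one, so the same weight count drops from ~3.2 KB to ~1.6 KB. A minimal Python sketch of that path, assuming symmetric per-tensor scales and little-endian byte order within each u32 word (the actual QAT recipe and packing order are not settled in the commit):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: float weights to int8 codes plus a scale."""
    scale = max(float(np.abs(w).max()) / 127.0, 1e-8)
    codes = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return codes, scale

def fake_quantize(w):
    """QAT forward pass: quantize then dequantize, so training sees the
    rounding error while the weights stay in float."""
    codes, scale = quantize_int8(w)
    return codes.astype(np.float32) * scale

def pack_bytes_to_u32(codes):
    """Pack 4 bytes into each u32 word, the layout the shader would unpack."""
    raw = codes.ravel().view(np.uint8)     # reinterpret int8 codes as raw bytes
    raw = np.pad(raw, (0, -len(raw) % 4))  # pad to a multiple of 4 bytes
    return raw.view("<u4")                 # little-endian u32 words
```

On the shader side, unpacking is the inverse: extract `(word >> (8u * i)) & 0xFFu` for each of the four bytes and rescale; WGSL's `unpack4x8snorm()` builtin covers the normalized variant.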

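For the binary weight format, the commit names a header, per-layer info, and a packed-f16 payload, but not the exact field layout. A hypothetical sketch in the spirit of export_cnn_v2_weights.py (the magic value, field order, and the (kh, kw, cin, cout) descriptor are all assumptions, not the script's actual format):

```python
import struct
import numpy as np

MAGIC = 0x32564E43  # "CNV2" read as little-endian bytes; an assumed value

def export_weights(layers, path):
    """layers: list of (kernel [kh, kw, cin, cout], bias [cout]) float arrays."""
    with open(path, "wb") as f:
        # Header: magic + layer count (layout is an assumption).
        f.write(struct.pack("<II", MAGIC, len(layers)))
        # Per-layer info: one fixed-size shape descriptor per layer.
        for kernel, _bias in layers:
            f.write(struct.pack("<IIII", *kernel.shape))
        # Payload: packed little-endian f16, weights then bias per layer.
        for kernel, bias in layers:
            f.write(kernel.astype("<f2").tobytes())
            f.write(bias.astype("<f2").tobytes())
```

A single contiguous blob fits the storage-buffer design named in the commit: CNNv2Effect::load_weights() can upload the file once and let each layer's dispatch index into it by offset.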