| author | skal <pascal.massimino@gmail.com> | 2026-02-10 20:00:26 +0100 |
|---|---|---|
| committer | skal <pascal.massimino@gmail.com> | 2026-02-10 20:00:26 +0100 |
| commit | 2a2369e38fbe1bf8261968dafc88dac73bdda7ce | |
| tree | bf4505ca53e501af9dab61b172ecc60f3671d79c /assets/common/shaders/compute/gen_grid.wgsl | |
| parent | 3153b55135788c8a7691929f913a2c9b96a44154 | |
fix: CNN training normalization pipeline consistency
**Training changes:**
- Final layer now outputs [0,1] directly with torch.clamp()
- Removed denormalization step (was converting [-1,1] to [0,1])
- Network learns [0,1] output natively
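The change above can be sketched in plain Python (a stand-in for the torch code; `clamp01` mirrors `torch.clamp(x, 0.0, 1.0)`, and `old_denormalize` is the removed step):

```python
def clamp01(x: float) -> float:
    # New final-layer behavior: output [0, 1] directly via a clamp
    # (plain-Python stand-in for torch.clamp(x, 0.0, 1.0)).
    return max(0.0, min(1.0, x))

def old_denormalize(y: float) -> float:
    # Removed step: the network used to emit [-1, 1] and this
    # remapped it to [0, 1] afterwards.
    return (y + 1.0) * 0.5
```

With the clamp as the final activation, the network learns the [0,1] range natively and no post-hoc remapping is needed.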
**Shader generation fixes:**
- Layer 0 uses _src variant (5 params, normalizes [0,1] input internally)
- Removed pre-normalization of input texture (handled by _src)
- Final layer blending: gray_out is already in [0,1], so no denormalization is needed
- Added generate_conv_src_function() for all kernel sizes
- Auto-generates _src variants when exporting (skips if exists)
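A minimal sketch of the skip-if-exists export logic (the function name `generate_conv_src_function` comes from the message above; the emitted WGSL body and the `export_src_variants` helper are hypothetical placeholders):

```python
def generate_conv_src_function(kernel_size: int) -> str:
    # Hypothetical emitter: produce the "_src" conv variant, which samples a
    # [0,1] texture and normalizes it to [-1,1] internally before convolving.
    name = f"conv{kernel_size}x{kernel_size}_7to4_src"
    return f"fn {name}() {{ /* normalize [0,1] -> [-1,1], then convolve */ }}"

def export_src_variants(kernel_sizes, existing):
    # Auto-generate one _src variant per kernel size, skipping any that
    # already exist (e.g. a hand-written variant in cnn_conv5x5.wgsl).
    generated = {}
    for k in kernel_sizes:
        name = f"conv{k}x{k}_7to4_src"
        if name in existing:
            continue
        generated[name] = generate_conv_src_function(k)
    return generated
```

The skip keeps hand-tuned shader variants authoritative while still covering every other kernel size automatically.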
**Cleanup:**
- Removed obsolete 4-channel functions from cnn_conv5x5.wgsl
- Keep only 7-channel variants (_7to4, _7to1, _7to4_src)
**Normalization flow:**
[0,1] texture → _src normalizes to [-1,1] → tanh keeps [-1,1] → ... → final conv clamps output to [0,1]
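The flow above can be traced numerically in plain Python (a sketch of the stated normalization contract, not the shader code itself; function names are hypothetical):

```python
import math

def src_normalize(u: float) -> float:
    # _src variant: map a [0,1] texture sample into the network's [-1,1] domain.
    return 2.0 * u - 1.0

def hidden(x: float) -> float:
    # Hidden layers apply tanh, so activations stay inside [-1,1].
    return math.tanh(x)

def final_conv(x: float) -> float:
    # Final layer clamps its output into [0,1]; no denormalization follows.
    return max(0.0, min(1.0, x))

def pipeline(u: float) -> float:
    # One hidden layer shown for brevity; intermediate layers all stay in [-1,1].
    return final_conv(hidden(src_normalize(u)))
```

Every stage's range matches the training-side convention, which is the consistency the fix restores.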
handoff(Claude): CNN normalization pipeline fixed and consistent with training
Diffstat (limited to 'assets/common/shaders/compute/gen_grid.wgsl')
0 files changed, 0 insertions, 0 deletions
