author    skal <pascal.massimino@gmail.com>  2026-02-13 23:17:42 +0100
committer skal <pascal.massimino@gmail.com>  2026-02-13 23:17:42 +0100
commit    6fa9ccf86b0bbefb48cefae19d4162115a3d63d3 (patch)
tree      529f68a33d9e4dcc8e473ed604c0bfb6f6f2704f /LOG.txt
parent    f81a30d15e1e7db0492f45a0b9bec6aaa20ae5c2 (diff)
CNN v2: Alpha channel depth handling and layer visualization
Training changes:
- Changed p3 default depth from 0.0 to 1.0 (far plane semantics)
- Extract depth from target alpha channel in both datasets
- Consistent alpha-as-depth across training/validation

Test tool enhancements (cnn_test):
- Added load_depth_from_alpha() for R32Float depth texture
- Fixed bind group layout for UnfilterableFloat sampling
- Added --save-intermediates with per-channel grayscale composites
- Each layer saved as 4x wide PNG (p0-p3 stacked horizontally)
- Global layers_composite.png for vertical layer stack overview

Investigation notes:
- Static features p4-p7 ARE computed and bound correctly
- Sin_20_y pattern visibility difference between tools under investigation
- Binary weights timestamp (Feb 13 20:36) vs HTML tool (Feb 13 22:12)
- Next: Update HTML tool with canonical binary weights

handoff(Claude): HTML tool weights update pending - base64-encoded canonical
weights ready in /tmp/weights_b64.txt for line 392 replacement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
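A minimal sketch of the alpha-as-depth extraction described above, assuming
RGBA targets loaded via PIL and CxHxW float tensors; the helper name
load_target_with_depth and the FAR_PLANE_DEPTH constant are illustrative,
not the actual train_cnn_v2.py code:

    import numpy as np
    import torch
    from PIL import Image

    FAR_PLANE_DEPTH = 1.0  # new p3 default: missing depth means "far plane"

    def load_target_with_depth(path):
        """Load an RGBA target and split it into (rgb, depth) tensors.

        The alpha channel is reinterpreted as normalized depth in [0, 1];
        images without an alpha channel default to the far plane."""
        img = Image.open(path)
        has_alpha = img.mode == "RGBA"
        arr = np.asarray(img.convert("RGBA"), dtype=np.float32) / 255.0  # HxWx4

        rgb = torch.from_numpy(arr[..., :3]).permute(2, 0, 1).contiguous()    # 3xHxW
        if has_alpha:
            depth = torch.from_numpy(arr[..., 3]).unsqueeze(0).contiguous()   # 1xHxW
        else:
            depth = torch.full((1, *rgb.shape[1:]), FAR_PLANE_DEPTH)          # far plane
        return rgb, depth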
Diffstat (limited to 'LOG.txt')
-rw-r--r--  LOG.txt  43
1 file changed, 0 insertions, 43 deletions
diff --git a/LOG.txt b/LOG.txt
deleted file mode 100644
index 50b77ea..0000000
--- a/LOG.txt
+++ /dev/null
@@ -1,43 +0,0 @@
-=== CNN v2 Complete Training Pipeline ===
-Input: training/input
-Target: training/target_2
-Epochs: 10000
-Checkpoint interval: 500
-
-[1/4] Training CNN v2 model...
-Training on cpu
-Loaded 8 image pairs
-Model: [16, 8, 4] channels, [1, 3, 5] kernels, 3456 weights
-
-Training for 10000 epochs...
-Traceback (most recent call last):
- File "/Users/skal/demo/training/train_cnn_v2.py", line 217, in <module>
- main()
- File "/Users/skal/demo/training/train_cnn_v2.py", line 213, in main
- train(args)
- File "/Users/skal/demo/training/train_cnn_v2.py", line 157, in train
- for static_feat, target in dataloader:
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 741, in __next__
- data = self._next_data()
- ^^^^^^^^^^^^^^^^^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 801, in _next_data
- data = self._dataset_fetcher.fetch(index) # may raise StopIteration
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 57, in fetch
- return self.collate_fn(data)
- ^^^^^^^^^^^^^^^^^^^^^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 401, in default_collate
- return collate(batch, collate_fn_map=default_collate_fn_map)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 214, in collate
- return [
- ^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 215, in <listcomp>
- collate(samples, collate_fn_map=collate_fn_map)
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 155, in collate
- return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 275, in collate_tensor_fn
- return torch.stack(batch, 0, out=out)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-RuntimeError: stack expects each tensor to be equal size, but got [8, 376, 626] at entry 0 and [8, 344, 361] at entry 1
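The RuntimeError at the end of the deleted log comes from default_collate
trying to stack target tensors of different spatial sizes ([8, 376, 626] vs
[8, 344, 361]) into a single batch; with image pairs of varying resolution the
usual fixes are batch_size=1 or cropping/resizing every sample to a common
size inside the dataset. A minimal sketch of a paired random-crop approach,
assuming (static_feat, target) CxHxW tensors with matching spatial dimensions;
the helper name random_paired_crop and the 256x256 crop size are illustrative,
not the actual train_cnn_v2.py fix:

    import torch

    def random_paired_crop(static_feat, target, size=(256, 256)):
        """Crop both CxHxW tensors at the same random offset so every sample
        in a batch has identical spatial dimensions and default_collate can
        stack them."""
        _, h, w = target.shape
        ch, cw = min(size[0], h), min(size[1], w)
        top = torch.randint(0, h - ch + 1, (1,)).item()
        left = torch.randint(0, w - cw + 1, (1,)).item()
        return (static_feat[..., top:top + ch, left:left + cw],
                target[..., top:top + ch, left:left + cw])

    # Applied in the dataset's __getitem__ (or keep batch_size=1 to avoid
    # stacking tensors of unequal size):
    #     static_feat, target = random_paired_crop(static_feat, target)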