Diffstat (limited to 'cnn_v3/docs/HOWTO.md')
-rw-r--r--  cnn_v3/docs/HOWTO.md | 139
1 file changed, 137 insertions(+), 2 deletions(-)
diff --git a/cnn_v3/docs/HOWTO.md b/cnn_v3/docs/HOWTO.md
index 983e8b7..08979e7 100644
--- a/cnn_v3/docs/HOWTO.md
+++ b/cnn_v3/docs/HOWTO.md
@@ -259,6 +259,8 @@ Test vectors generated by `cnn_v3/training/gen_test_vectors.py` (PyTorch referen
| 4 — C++ CNNv3Effect | ✅ Done | FiLM uniform upload, 36/36 tests pass |
| 5 — Parity validation | ✅ Done | test_cnn_v3_parity.cc, max_err=4.88e-4 |
| 6 — FiLM MLP training | ✅ Done | train_cnn_v3.py + cnn_v3_utils.py written |
+| 7 — G-buffer visualizer (C++) | ✅ Done | GBufViewEffect, 36/36 tests pass |
+| 8 — Sample loader (web tool) | ✅ Done | "Load sample directory" in cnn_v3/tools/ |
---
@@ -337,9 +339,142 @@ auto src = ShaderComposer::Get().Compose({"cnn_v3/common"}, raw_wgsl);
---
-## 9. See Also
+## 9. Validation Workflow
+
+Two complementary tools let you verify each stage of the pipeline before training
+or integrating into the demo.
+
+### 9a. C++ — GBufViewEffect (G-buffer channel grid)
+
+`GBufViewEffect` renders all 20 feature channels from `feat_tex0` / `feat_tex1`
+in a **4×5 tiled grid** so you can see the G-buffer at a glance.
+
+**Registration (already done)**
+
+| File | What changed |
+|------|-------------|
+| `cnn_v3/shaders/gbuf_view.wgsl` | New fragment shader |
+| `cnn_v3/src/gbuf_view_effect.h` | Effect class declaration |
+| `cnn_v3/src/gbuf_view_effect.cc` | Effect class implementation |
+| `workspaces/main/assets.txt` | `SHADER_GBUF_VIEW` asset |
+| `cmake/DemoSourceLists.cmake` | `gbuf_view_effect.cc` in COMMON_GPU_EFFECTS |
+| `src/gpu/demo_effects.h` | `#include "../../cnn_v3/src/gbuf_view_effect.h"` |
+| `src/effects/shaders.h/.cc` | `gbuf_view_wgsl` extern declaration + definition |
+| `src/tests/gpu/test_demo_effects.cc` | GBufViewEffect test |
+
+**Constructor signature**
+
+```cpp
+GBufViewEffect(const GpuContext& ctx,
+ const std::vector<std::string>& inputs, // {feat_tex0, feat_tex1}
+ const std::vector<std::string>& outputs, // {gbuf_view_out}
+ float start_time, float end_time)
+```
+
+**Wiring example** (alongside GBufferEffect):
+
+```cpp
+auto gbuf = std::make_shared<GBufferEffect>(ctx,
+ std::vector<std::string>{"prev_cnn"},
+ std::vector<std::string>{"gbuf_feat0", "gbuf_feat1"}, 0.0f, 60.0f);
+auto gview = std::make_shared<GBufViewEffect>(ctx,
+ std::vector<std::string>{"gbuf_feat0", "gbuf_feat1"},
+ std::vector<std::string>{"gbuf_view_out"}, 0.0f, 60.0f);
+```
+
+**Grid layout** (output resolution = input resolution, channel cells each 1/4 W × 1/5 H):
+
+| Row | Col 0 | Col 1 | Col 2 | Col 3 |
+|-----|-------|-------|-------|-------|
+| 0 | `alb.r` (red tint) | `alb.g` (green tint) | `alb.b` (blue tint) | `nrm.x` remap→[0,1] |
+| 1 | `nrm.y` remap→[0,1] | `depth` (inverted) | `dzdx` ×20+0.5 | `dzdy` ×20+0.5 |
+| 2 | `mat_id` | `prev.r` | `prev.g` | `prev.b` |
+| 3 | `mip1.r` | `mip1.g` | `mip1.b` | `mip2.r` |
+| 4 | `mip2.g` | `mip2.b` | `shadow` | `transp` |
+
+1-pixel gray grid lines separate cells. Dark background for out-of-range cells.
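The tiling arithmetic above can be sketched in Python; `channel_at` is a hypothetical helper mirroring the shader's 4×5 cell mapping, not code from the repo (grid lines and the out-of-range background are omitted).

```python
def channel_at(x: int, y: int, width: int, height: int) -> int:
    """Return the feature channel index (0..19) shown at output pixel (x, y)."""
    col = min(x * 4 // width, 3)   # 4 columns -> each cell is width/4 wide
    row = min(y * 5 // height, 4)  # 5 rows    -> each cell is height/5 tall
    return row * 4 + col
```

For example, the center pixel of a 1920×1080 output falls in row 2, column 2, i.e. channel 10 (`prev.g` in the table above).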
+
+**Shader binding layout** (no sampler needed — integer texture):
+
+| Binding | Type | Content |
+|---------|------|---------|
+| 0 | `texture_2d<u32>` | `feat_tex0` (8 f16 channels via `pack2x16float`) |
+| 1 | `texture_2d<u32>` | `feat_tex1` (12 u8 channels via `pack4x8unorm`) |
+| 2 | `uniform` (8 B) | `GBufViewUniforms { resolution: vec2f }` |
+
+The BGL is built manually in the constructor (no sampler) — this is an exception to the
+standard post-process pattern: `rgba32uint` textures have sample type
+`WGPUTextureSampleType_Uint`, which cannot be used with a sampler, so texels are read
+with `textureLoad()` instead.
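When inspecting texels from a readback buffer on the CPU, the two WGSL packing builtins can be emulated with numpy. These helpers are an illustrative sketch, not part of the codebase; `pack4x8unorm` rounding here approximates the WGSL spec's behavior.

```python
import numpy as np

def pack2x16float(a: float, b: float) -> int:
    """Pack two floats as f16 into one u32 (WGSL pack2x16float equivalent)."""
    halves = np.array([a, b], dtype=np.float16).view(np.uint16)
    return int(halves[0]) | (int(halves[1]) << 16)

def unpack2x16float(word: int) -> tuple:
    """Inverse: split a u32 into two f16 values, returned as Python floats."""
    lo, hi = word & 0xFFFF, (word >> 16) & 0xFFFF
    vals = np.array([lo, hi], dtype=np.uint16).view(np.float16)
    return float(vals[0]), float(vals[1])

def pack4x8unorm(r: float, g: float, b: float, a: float) -> int:
    """Pack four [0,1] floats into one u32 as bytes, little-endian order."""
    word = 0
    for i, v in enumerate((r, g, b, a)):
        word |= round(max(0.0, min(1.0, v)) * 255.0) << (8 * i)
    return word
```

A round trip through `pack2x16float`/`unpack2x16float` is exact for values representable in f16, which is useful when diffing dumped `feat_tex0` texels against the Python reference.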
+
+**Implementation note — bind group recreation**
+
+`render()` calls `wgpuRenderPipelineGetBindGroupLayout(pipeline_, 0)` each frame to
+extract the BGL, creates a new `BindGroup`, then immediately releases the BGL handle.
+This avoids storing a raw BGL as a member (no RAII wrapper exists for it) while
+remaining correct across ping-pong buffer swaps.
+
+---
+
+### 9b. Web tool — "Load sample directory"
+
+`cnn_v3/tools/index.html` has a **"Load sample directory"** button that:
+1. Opens a `webkitdirectory` picker to select a sample folder
+2. Loads all G-buffer component PNGs as `rgba8unorm` GPU textures
+3. Runs the `FULL_PACK_SHADER` compute shader to assemble `feat_tex0` / `feat_tex1`
+4. Runs full CNN inference (enc0 → enc1 → bottleneck → dec1 → dec0)
+5. Displays the CNN output on the main canvas
+6. If `target.png` is present, shows it side-by-side and prints PSNR
+
+**File name matching** (case-insensitive, substring):
+
+| Channel | Matched patterns | Fallback |
+|---------|-----------------|---------|
+| Albedo (required) | `albedo`, `color` | — (error if missing) |
+| Normal | `normal`, `nrm` | `rgba(128,128,0,255)` — flat (0,0) oct-encoded |
+| Depth | `depth` | `0` — zero depth |
+| Mat ID | `matid`, `index`, `mat_id` | `0` — no material |
+| Shadow | `shadow` | `255` — fully lit |
+| Transparency | `transp`, `alpha` | `0` — fully opaque |
+| Target | `target`, `output`, `ground_truth` | not shown |
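The matching rules in the table can be expressed as a small Python sketch (the tool itself is JavaScript; `classify` and the pattern dict are illustrative). Note that order matters: earlier channels win when a filename matches several patterns.

```python
# Patterns per channel, in priority order (mirrors the table above).
PATTERNS = {
    "albedo": ["albedo", "color"],
    "normal": ["normal", "nrm"],
    "depth":  ["depth"],
    "matid":  ["matid", "index", "mat_id"],
    "shadow": ["shadow"],
    "transp": ["transp", "alpha"],
    "target": ["target", "output", "ground_truth"],
}

def classify(filename: str):
    """Return the first channel whose pattern appears in the lowercased name."""
    name = filename.lower()
    for channel, patterns in PATTERNS.items():
        if any(p in name for p in patterns):
            return channel
    return None  # unrecognized file; the tool falls back to defaults
```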
+
+**`FULL_PACK_SHADER`** (defined in `cnn_v3/tools/shaders.js`)
+
+WebGPU compute shader (`@workgroup_size(8,8)`) with the following bindings:
+
+| Binding | Resource | Format |
+|---------|----------|--------|
+| 0–5 | albedo, normal, depth, matid, shadow, transp | `texture_2d<f32>` (rgba8unorm, R channel for single-channel maps) |
+| 6 | feat_tex0 output | `texture_storage_2d<rgba32uint,write>` |
+| 7 | feat_tex1 output | `texture_storage_2d<rgba32uint,write>` |
+
+No sampler — all reads use `textureLoad()` (integer texel coordinates).
+
+Packs channels identically to `gbuf_pack.wgsl`:
+- `feat_tex0`: `pack2x16float(alb.rg)`, `pack2x16float(alb.b, nrm.x)`, `pack2x16float(nrm.y, depth)`, `pack2x16float(dzdx, dzdy)`
+- `feat_tex1`: `pack4x8unorm(matid,0,0,0)`, `pack4x8unorm(mip1.rgb, mip2.r)`, `pack4x8unorm(mip2.gb, shadow, transp)`
+- Depth gradients: central differences on depth R channel
+- Mip1 / Mip2: box2 (2×2) / box4 (4×4) average filter on albedo
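The derived-channel math can be sketched with numpy; this is a CPU reference under simplifying assumptions (one-sided differences at image edges, clamped box-filter windows), and the shader's exact boundary handling may differ.

```python
import numpy as np

def depth_gradients(depth: np.ndarray):
    """dz/dx, dz/dy via central differences (one-sided at the edges)."""
    dzdy, dzdx = np.gradient(depth)  # axis 0 = y, axis 1 = x
    return dzdx, dzdy

def box_average(img: np.ndarray, k: int) -> np.ndarray:
    """k x k box filter (k=2 for mip1, k=4 for mip2), same-resolution output.
    Windows are clamped to the image bounds; slow loop, for reference only."""
    h, w = img.shape[0], img.shape[1]
    out = np.empty_like(img, dtype=np.float32)
    for y in range(h):
        for x in range(w):
            y0, x0 = max(0, y - k // 2), max(0, x - k // 2)
            out[y, x] = img[y0:y0 + k, x0:x0 + k].mean(axis=(0, 1))
    return out
```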
+
+**PSNR computation** (`computePSNR`)
+
+- CNN output (`rgba16float`) copied to CPU staging buffer via `copyTextureToBuffer`
+- f16→float32 decoded in JavaScript
+- Target drawn to offscreen `<canvas>` via `drawImage`, pixels read with `getImageData`
+- MSE and PSNR computed over all RGB pixels (alpha ignored)
+- Result displayed below target canvas as `MSE=X.XXXXX PSNR=XX.XXdB`
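The metric itself is simple; a Python sketch, assuming both images are already normalized to `[0,1]` float arrays (the real `computePSNR` works on f16 readback data and canvas pixels):

```python
import math
import numpy as np

def compute_psnr(out_rgba: np.ndarray, target_rgba: np.ndarray):
    """MSE and PSNR over RGB only; inputs are (H, W, 4) float arrays in [0,1]."""
    diff = out_rgba[..., :3] - target_rgba[..., :3]  # alpha ignored
    mse = float(np.mean(diff * diff))
    psnr = float("inf") if mse == 0.0 else 10.0 * math.log10(1.0 / mse)
    return mse, psnr
```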
+
+**`runFromFeat(f0, f1, w, h)`**
+
+Called by `loadSampleDir()` after packing, or can be called directly if feat textures
+are already available. Skips the PNG-pack step, runs all 5 CNN passes, and displays
+the result. Intermediate textures are stored in `this.layerTextures` so the Layer
+Visualization panel still works.
+
+---
+
+## 10. See Also
- `cnn_v3/docs/CNN_V3.md` — Full architecture design (U-Net, FiLM, feature layout)
- `doc/EFFECT_WORKFLOW.md` — General effect integration guide
- `cnn_v2/docs/CNN_V2.md` — Reference implementation (simpler, operational)
-- `src/tests/gpu/test_demo_effects.cc` — GBufferEffect construction test
+- `src/tests/gpu/test_demo_effects.cc` — GBufferEffect + GBufViewEffect tests