Diffstat (limited to 'doc')
 doc/AI_RULES.md                    |  19
 doc/CNN_EFFECT.md                  |  85
 doc/CNN_RGBD_GRAYSCALE_SUMMARY.md  | 134
 doc/COMPLETED.md                   |  16
 doc/CONTRIBUTING.md                |  15
 doc/EFFECT_WORKFLOW.md             | 228
 doc/HOWTO.md                       |  24
 doc/RECIPE.md                      |   4
8 files changed, 488 insertions(+), 37 deletions(-)
diff --git a/doc/AI_RULES.md b/doc/AI_RULES.md
index d18a0cc..1a4ee78 100644
--- a/doc/AI_RULES.md
+++ b/doc/AI_RULES.md
@@ -5,3 +5,22 @@
 - Prefer small, reviewable commits
 - All `cmake --build` commands must use the `-j4` option for parallel building.
 - After a task, the 'big' final commit should contain a short handoff tag like "handoff(Gemini): ..." if you're gemini-cli, or "handoff(Claude): ..." if you're claude-code.
+
+## Adding Visual Effects
+
+**IMPORTANT:** When adding new visual effects, follow the complete workflow in `doc/EFFECT_WORKFLOW.md`.
+
+**Required steps (must complete ALL):**
+1. Create the effect files (.h, .cc, .wgsl)
+2. Add the shader to `workspaces/main/assets.txt`
+3. Add the `.cc` to CMakeLists.txt GPU_SOURCES (BOTH sections: headless and normal)
+4. Include the header in `src/gpu/demo_effects.h`
+5. Add to the timeline with `EFFECT +` (the priority modifier is REQUIRED)
+6. Add to the test list in `src/tests/gpu/test_demo_effects.cc`
+7. Build and verify: `cmake --build build -j4 && cd build && ./test_demo_effects`
+
+**Common mistakes to avoid:**
+- Missing priority modifier in the timeline (`EFFECT` must be `EFFECT +`, `EFFECT =`, or `EFFECT -`)
+- Adding the `.cc` to only one CMakeLists.txt section (BOTH headless and normal are needed)
+- Wrong asset ID (check the assets.txt entry name → `ASSET_SHADER_<NAME>`)
+- Forgetting to add the effect to the test file
diff --git a/doc/CNN_EFFECT.md b/doc/CNN_EFFECT.md
index ae0f38a..4659fd3 100644
--- a/doc/CNN_EFFECT.md
+++ b/doc/CNN_EFFECT.md
@@ -21,27 +21,46 @@ Trainable convolutional neural network layers for artistic stylization (painterl
 
 ## Architecture
 
-### Coordinate-Aware Layer 0
+### RGBD → Grayscale Pipeline
 
-Layer 0 accepts normalized (x,y) patch center coordinates alongside RGBA samples:
+**Input:** RGBD (RGB + inverse depth D=1/z)
+**Output:** Grayscale (1 channel)
+**Layer Input:** 7 channels = [RGBD, UV coords, grayscale], all normalized to [-1,1]
+
+**Architecture:**
+- **Inner layers (0..N-2):** Conv2d(7→4) - output RGBD
+- **Final layer (N-1):** Conv2d(7→1) - output grayscale
 
 ```wgsl
-fn cnn_conv3x3_with_coord(
+// Inner layers: 7→4 (RGBD output)
+fn cnn_conv3x3_7to4(
   tex: texture_2d<f32>,
   samp: sampler,
-  uv: vec2<f32>,                         // Center position [0,1]
+  uv: vec2<f32>,
   resolution: vec2<f32>,
-  rgba_weights: array<mat4x4<f32>, 9>,   // 9 samples × 4×4 matrix
-  coord_weights: mat2x4<f32>,            // 2 coords → 4 outputs
-  bias: vec4<f32>
+  original: vec4<f32>,                   // Original RGBD [-1,1]
+  weights: array<array<f32, 8>, 36>      // 9 pos × 4 out × (7 weights + bias)
 ) -> vec4<f32>
-```
-
-**Input structure:** 9 RGBA samples (36 values) + 1 xy coordinate (2 values) = 38 inputs → 4 outputs
+
+// Final layer: 7→1 (grayscale output)
+fn cnn_conv3x3_7to1(
+  tex: texture_2d<f32>,
+  samp: sampler,
+  uv: vec2<f32>,
+  resolution: vec2<f32>,
+  original: vec4<f32>,
+  weights: array<array<f32, 8>, 9>       // 9 pos × (7 weights + bias)
+) -> f32
+```
 
-**Size impact:** +32B coord weights, kernel-agnostic
+**Input normalization:**
+- **fs_main** normalizes textures once: `(tex - 0.5) * 2` → [-1,1]
+- **Conv functions** normalize UV coords: `(uv - 0.5) * 2` → [-1,1]
+- **Grayscale** is computed from the normalized RGBD: `0.2126*R + 0.7152*G + 0.0722*B`
+- **Inter-layer data** stays in [-1,1] (no denormalization)
+- **Final output** is denormalized for display: `(result + 1.0) * 0.5` → [0,1]
 
-**Use cases:** Position-dependent stylization (vignettes, corner darkening, radial gradients)
+**Activation:** tanh for inner layers (output stays in [-1,1]), none for the final layer
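+
+For illustration, a minimal PyTorch sketch of how one layer's 7-channel input could be assembled (a sketch of the scheme above, not the exact `train_cnn.py` code; function and variable names are illustrative):
+
+```python
+import torch
+
+def make_layer_input(rgbd: torch.Tensor) -> torch.Tensor:
+    """rgbd: (B, 4, H, W) in [0,1]. Returns (B, 7, H, W) in [-1,1]."""
+    b, _, h, w = rgbd.shape
+    rgbd_n = (rgbd - 0.5) * 2.0                      # RGBD -> [-1,1]
+    v, u = torch.meshgrid(torch.linspace(0, 1, h),
+                          torch.linspace(0, 1, w), indexing="ij")
+    uv = torch.stack([u, v]).unsqueeze(0).expand(b, -1, -1, -1).to(rgbd)
+    uv_n = (uv - 0.5) * 2.0                          # UV -> [-1,1]
+    gray = (rgbd[:, 0:1] * 0.2126 + rgbd[:, 1:2] * 0.7152
+            + rgbd[:, 2:3] * 0.0722)                 # luminance weights sum to 1
+    gray_n = (gray - 0.5) * 2.0                      # grayscale -> [-1,1]
+    return torch.cat([rgbd_n, uv_n, gray_n], dim=1)  # 4 + 2 + 1 = 7 channels
+```
+
+An inner layer is then `torch.tanh(conv7to4(make_layer_input(x)))`, and the final layer applies `conv7to1(...)` with no activation, denormalized once for display.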
 
 ### Multi-Layer Architecture
@@ -80,18 +99,15 @@ workspaces/main/shaders/cnn/
 
 ### 1. Prepare Training Data
 
 Collect input/target image pairs:
-- **Input:** Raw 3D render
-- **Target:** Artistic style (hand-painted, filtered, stylized)
+- **Input:** RGBA (RGB + depth as the alpha channel, D=1/z)
+- **Target:** Grayscale stylized output
 
 ```bash
-training/input/img_000.png    # Raw render
-training/output/img_000.png   # Stylized target
+training/input/img_000.png    # RGBA render (RGB + depth)
+training/output/img_000.png   # Grayscale target
 ```
 
-Use `image_style_processor.py` to generate targets:
-```bash
-python3 training/image_style_processor.py input/ output/ pencil_sketch
-```
+**Note:** Input images must be RGBA where alpha = inverse depth (1/z)
 
 ### 2. Train Network
 
@@ -135,6 +151,14 @@ python3 training/train_cnn.py \
   --output workspaces/main/shaders/cnn/cnn_weights_generated.wgsl
 ```
 
+**Generate ground truth (for shader validation):**
+```bash
+python3 training/train_cnn.py \
+  --infer training/input/img_000.png \
+  --export-only training/checkpoints/checkpoint_epoch_200.pth \
+  --output training/ground_truth.png
+```
+
 ### 3. Rebuild Demo
 
 The training script auto-generates both `cnn_weights_generated.wgsl` and `cnn_layer.wgsl`:
@@ -245,20 +269,25 @@ Expands to:
 
 **Weight Storage:**
 
-**Layer 0 (coordinate-aware):**
+**Inner layers (7→4, RGBD output):**
 ```wgsl
-const rgba_weights_layer0: array<mat4x4<f32>, 9> = array(...);
-const coord_weights_layer0 = mat2x4<f32>(
-  0.1, -0.2, 0.0, 0.0,   // x-coord weights
-  -0.1, 0.0, 0.2, 0.0    // y-coord weights
+// Structure: array<array<f32, 8>, 36>
+// 9 positions × 4 output channels, each with 7 weights + bias
+const weights_layer0: array<array<f32, 8>, 36> = array(
+  array<f32, 8>(w0_r, w0_g, w0_b, w0_d, w0_u, w0_v, w0_gray, bias0),  // pos0_ch0
+  array<f32, 8>(w1_r, w1_g, w1_b, w1_d, w1_u, w1_v, w1_gray, bias1),  // pos0_ch1
+  // ... 34 more entries
 );
-const bias_layer0 = vec4<f32>(0.0, 0.0, 0.0, 0.0);
 ```
 
-**Layers 1+ (standard):**
+**Final layer (7→1, grayscale output):**
 ```wgsl
-const weights_layer1: array<mat4x4<f32>, 9> = array(...);
-const bias_layer1 = vec4<f32>(0.0, 0.0, 0.0, 0.0);
+// Structure: array<array<f32, 8>, 9>
+// 9 positions, each with 7 weights + bias
+const weights_layerN: array<array<f32, 8>, 9> = array(
+  array<f32, 8>(w0_r, w0_g, w0_b, w0_d, w0_u, w0_v, w0_gray, bias0),  // pos0
+  // ... 8 more entries
+);
 ```
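+
+For intuition, a hedged Python sketch of how a `Conv2d(7, 4, 3)` could be flattened into this layout (the row ordering and the handling of the per-row bias slot are assumptions; `export_weights_to_wgsl()` in `training/train_cnn.py` is authoritative):
+
+```python
+import torch.nn as nn
+
+def flatten_7to4(conv: nn.Conv2d) -> list[list[float]]:
+    """Conv2d(7, 4, kernel_size=3) -> 36 rows of 8 floats
+    (9 kernel positions x 4 output channels, 7 weights + bias)."""
+    w = conv.weight.detach()                 # shape (4, 7, 3, 3)
+    b = conv.bias.detach()                   # shape (4,)
+    rows = []
+    for ky in range(3):
+        for kx in range(3):                  # position-major ordering (assumed)
+            for oc in range(4):
+                rows.append([*w[oc, :, ky, kx].tolist(),
+                             b[oc].item()])  # full bias per row (assumed)
+    return rows
+```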
 
 ---
diff --git a/doc/CNN_RGBD_GRAYSCALE_SUMMARY.md b/doc/CNN_RGBD_GRAYSCALE_SUMMARY.md
new file mode 100644
index 0000000..4c13693
--- /dev/null
+++ b/doc/CNN_RGBD_GRAYSCALE_SUMMARY.md
@@ -0,0 +1,134 @@
+# CNN RGBD→Grayscale Architecture Implementation
+
+## Summary
+
+Implemented the CNN architecture upgrade: RGBD input → grayscale output, with a 7-channel augmented layer input.
+
+## Changes Made
+
+### Architecture
+
+**Input:** RGBD (4 channels: RGB + inverse depth D=1/z)
+**Output:** Grayscale (1 channel)
+**Layer Input:** 7 channels = [RGBD, UV coords, grayscale], all normalized to [-1,1]
+
+**Layer Configuration:**
+- Inner layers (0..N-2): Conv2d(7→4) - output RGBD with tanh activation
+- Final layer (N-1): Conv2d(7→1) - output grayscale, no activation
+
+### Input Normalization (all to [-1,1])
+
+- **RGBD:** `(rgbd - 0.5) * 2`
+- **UV coords:** `(uv - 0.5) * 2`
+- **Grayscale:** `(0.2126*R + 0.7152*G + 0.0722*B - 0.5) * 2`
+
+**Rationale:** Zero-centered inputs suit the tanh activation and improve gradient flow.
+
+### Modified Files
+
+**Training (`training/train_cnn.py`):**
+1. Removed the `CoordConv2d` class
+2. Updated `SimpleCNN`:
+   - Inner layers: `Conv2d(7, 4)` - RGBD output
+   - Final layer: `Conv2d(7, 1)` - grayscale output
+3. Updated `forward()`:
+   - Normalize RGBD/coords/gray to [-1,1]
+   - Concatenate the 7-channel input for each layer
+   - Apply tanh (inner) or no activation (final)
+   - Denormalize the final output
+4. Updated `export_weights_to_wgsl()`:
+   - Inner: `array<array<f32, 8>, 36>` (9 pos × 4 ch × 8 values)
+   - Final: `array<array<f32, 8>, 9>` (9 pos × 8 values)
+5. Updated `generate_layer_shader()`:
+   - Use `cnn_conv3x3_7to4` for inner layers
+   - Use `cnn_conv3x3_7to1` for the final layer
+   - Denormalize outputs from [-1,1] to [0,1]
+6. Updated `ImagePairDataset`:
+   - Load RGBA input (was RGB)
+
+**Shaders (`workspaces/main/shaders/cnn/cnn_conv3x3.wgsl`):**
+1. Added `cnn_conv3x3_7to4()`:
+   - 7-channel input: [RGBD, uv_x, uv_y, gray]
+   - 4-channel output: RGBD
+   - Weights: `array<array<f32, 8>, 36>`
+2. Added `cnn_conv3x3_7to1()`:
+   - 7-channel input: [RGBD, uv_x, uv_y, gray]
+   - 1-channel output: grayscale
+   - Weights: `array<array<f32, 8>, 9>`
+
+**Documentation (`doc/CNN_EFFECT.md`):**
+1. Updated the architecture section with the RGBD→grayscale pipeline
+2. Updated the training data requirements (RGBA input)
+3. Updated the weight storage format
+
+### No C++ Changes
+
+`CNNLayerParams` and the bind groups remain unchanged.
+
+## Data Flow
+
+1. Layer 0 captures the original RGBD to `captured_frame`
+2. Each layer:
+   - Samples the previous layer's output (RGBD in [0,1])
+   - Normalizes RGBD to [-1,1]
+   - Computes UV coords and grayscale, normalized to [-1,1]
+   - Concatenates the 7-channel input
+   - Applies the convolution with layer-specific weights
+   - Outputs RGBD (inner) or grayscale (final) in [-1,1]
+   - Applies tanh (inner layers only)
+   - Denormalizes to [0,1] for texture storage
+   - Blends with the original
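+
+As a worked illustration of the convolution step above, the 7→4 kernel evaluated at a single pixel (numpy sketch; the bias convention is an assumption to check against `cnn_conv3x3.wgsl`):
+
+```python
+import numpy as np
+
+def conv7to4_at_pixel(taps: np.ndarray, weights: np.ndarray) -> np.ndarray:
+    """taps: (9, 7) 7-channel inputs at the 9 kernel taps, in [-1,1].
+    weights: (36, 8) rows of 7 weights + bias (9 positions x 4 channels).
+    Returns 4 RGBD values in [-1,1], tanh applied as for inner layers."""
+    acc = np.zeros(4)
+    for pos in range(9):
+        for ch in range(4):
+            acc[ch] += taps[pos] @ weights[pos * 4 + ch, :7]
+    acc += weights[0:4, 7]   # bias once per channel (taken from the pos-0
+                             # rows here; the shader's convention may differ)
+    return np.tanh(acc)
+```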
+
+## Next Steps
+
+1. **Prepare RGBD training data:**
+   - Input: RGBA images (RGB + depth in alpha)
+   - Target: Grayscale stylized output
+
+2. **Train the network:**
+   ```bash
+   python3 training/train_cnn.py \
+     --input training/input \
+     --target training/output \
+     --layers 3 \
+     --epochs 1000
+   ```
+
+3. **Verify the generated shaders:**
+   - Check the `cnn_weights_generated.wgsl` structure
+   - Check that `cnn_layer.wgsl` uses the new conv functions
+
+4. **Test in the demo:**
+   ```bash
+   cmake --build build -j4
+   ./build/demo64k
+   ```
+
+## Design Rationale
+
+**Why [-1,1] normalization?**
+- Centered inputs for tanh (which operates best around 0)
+- Better gradient flow
+- Standard ML practice for normalized data
+
+**Why RGBD throughout instead of RGB?**
+- Depth information propagates through the network
+- Enables depth-aware stylization
+- Consistent 4-channel processing
+
+**Why a 7-channel input?**
+- Coordinates: position-dependent effects (vignettes)
+- Grayscale: luminance-aware processing
+- RGBD: full color + depth information
+- Enables richer feature learning
+
+## Testing Checklist
+
+- [ ] Train the network with RGBD input data
+- [ ] Verify the `cnn_weights_generated.wgsl` structure
+- [ ] Verify `cnn_layer.wgsl` uses the `7to4`/`7to1` functions
+- [ ] Build the demo without errors
+- [ ] Visual test: inner layers show RGBD evolution
+- [ ] Visual test: final layer produces grayscale
+- [ ] Visual test: blending works correctly
+- [ ] Compare quality with the previous RGB→RGB architecture
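+
+To make the comparison items concrete, a small hedged sketch for diffing the Python-side ground truth against a captured shader frame (the capture path is hypothetical; use whatever frame dump the demo provides):
+
+```python
+import numpy as np
+from PIL import Image
+
+def max_abs_diff(a_path: str, b_path: str) -> float:
+    """Max per-pixel difference between two grayscale images, in [0,1]."""
+    a = np.asarray(Image.open(a_path).convert("L"), dtype=np.float32)
+    b = np.asarray(Image.open(b_path).convert("L"), dtype=np.float32)
+    return float(np.abs(a - b).max() / 255.0)
+
+# e.g. max_abs_diff("training/ground_truth.png", "frame_capture.png")
+```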
diff --git a/doc/COMPLETED.md b/doc/COMPLETED.md
index d1c89af..2336f62 100644
--- a/doc/COMPLETED.md
+++ b/doc/COMPLETED.md
@@ -29,6 +29,22 @@ Detailed historical documents have been moved to `doc/archive/` for reference:
 
 Use `read @doc/archive/FILENAME.md` to access archived documents.
 
+## Recently Completed (February 10, 2026)
+
+- [x] **WGPU Boilerplate Factorization**
+  - **Goal**: Reduce repetitive WGPU code via builder-pattern helpers
+  - **Implementation**:
+    - Created `BindGroupLayoutBuilder` and `BindGroupBuilder` for declarative bind group creation
+    - Created `RenderPipelineBuilder` to simplify pipeline setup with ShaderComposer integration
+    - Created a `SamplerCache` singleton to deduplicate sampler instances
+    - Refactored `post_process_helper.cc`, `cnn_effect.cc`, `rotating_cube_effect.cc`
+  - **Result**:
+    - Bind group creation: 19 instances reduced from 14 to 4 lines each
+    - Pipeline creation: 30-50 lines reduced to 8 lines
+    - Sampler deduplication: 6 instances → cached
+    - Total: -122 lines of boilerplate; binary size unchanged (6.3M debug)
+    - Tests pass; the builders prevent binding-index errors
+
 ## Recently Completed (February 9, 2026)
 
 - [x] **External Library Size Measurement (Task #76)**
diff --git a/doc/CONTRIBUTING.md b/doc/CONTRIBUTING.md
index 9cd785b..98df873 100644
--- a/doc/CONTRIBUTING.md
+++ b/doc/CONTRIBUTING.md
@@ -65,12 +65,15 @@ See `doc/CODING_STYLE.md` for detailed examples.
 ## Development Protocols
 
 ### Adding Visual Effect
-1. Implement `Effect` subclass in `src/gpu/demo_effects.cc`
-2. Add to workspace `timeline.seq` (e.g., `workspaces/main/timeline.seq`)
-3. **Update `test_demo_effects.cc`**:
-   - Add to test list
-   - Increment `EXPECTED_*_COUNT`
-4. Verify:
+1. Create the effect class files (use `tools/shadertoy/convert_shadertoy.py` or the templates)
+2. Add the shader to `workspaces/main/assets.txt`
+3. Add the effect `.cc` file to `CMakeLists.txt` GPU_SOURCES (both sections)
+4. Include the header in `src/gpu/demo_effects.h`
+5. Add to the workspace `timeline.seq` (e.g., `workspaces/main/timeline.seq`)
+6. **Update `src/tests/gpu/test_demo_effects.cc`**:
+   - Add to the `post_process_effects` list (lines 80-93) or the `scene_effects` list (lines 125-137)
+   - Example: `{"MyEffect", std::make_shared<MyEffect>(fixture.ctx())},`
+7. Verify:
    ```bash
    cmake -S . -B build -DDEMO_BUILD_TESTS=ON
    cmake --build build -j4 --target test_demo_effects
diff --git a/doc/EFFECT_WORKFLOW.md b/doc/EFFECT_WORKFLOW.md
new file mode 100644
index 0000000..45c47b7
--- /dev/null
+++ b/doc/EFFECT_WORKFLOW.md
@@ -0,0 +1,228 @@
+# Effect Creation Workflow
+
+**Target Audience:** AI coding agents and developers
+
+An automated checklist for adding new visual effects to the demo.
+
+---
+
+## Quick Reference
+
+**For ShaderToy conversions:** Use `tools/shadertoy/convert_shadertoy.py`, then follow steps 3-8 below.
+
+**For custom effects:** Follow all steps 1-8.
+
+---
+
+## Step-by-Step Workflow
+
+### 1. Create Effect Files
+
+**Location:**
+- Header: `src/gpu/effects/<effect_name>_effect.h`
+- Implementation: `src/gpu/effects/<effect_name>_effect.cc`
+- Shader: `workspaces/main/shaders/<effect_name>.wgsl`
+
+**Naming Convention:**
+- Class name: `<EffectName>Effect` (e.g., `TunnelEffect`, `PlasmaEffect`)
+- Files: `<effect_name>_effect.*` (snake_case)
+
+**Base Class:**
+- Post-process effects: inherit from `PostProcessEffect`
+- Scene effects: inherit from `Effect`
+
+**Template:** See `tools/shadertoy/template.*` or use `convert_shadertoy.py`
+
+### 2. Add Shader to Assets
+
+**File:** `workspaces/main/assets.txt`
+
+**Format:**
+```
+SHADER_<UPPER_SNAKE_NAME>, NONE, shaders/<effect_name>.wgsl, "Effect description"
+```
+
+**Example:**
+```
+SHADER_TUNNEL, NONE, shaders/tunnel.wgsl, "Tunnel effect shader"
+```
+
+**Asset ID:** Will be `AssetId::ASSET_SHADER_<UPPER_SNAKE_NAME>` in C++
+
+### 3. Add to CMakeLists.txt
+
+**File:** `CMakeLists.txt`
+
+**Action:** Add `src/gpu/effects/<effect_name>_effect.cc` to **BOTH** GPU_SOURCES sections:
+- Headless mode section (around lines 141-167)
+- Normal mode section (around lines 171-197)
+
+**Location:** After similar effects (post-process with post-process, scene with scene)
+
+**Example:**
+```cmake
+# In the headless section (line ~152):
+    src/gpu/effects/solarize_effect.cc
+    src/gpu/effects/tunnel_effect.cc   # <-- Add here
+    src/gpu/effects/chroma_aberration_effect.cc
+
+# In the normal section (line ~183):
+    src/gpu/effects/solarize_effect.cc
+    src/gpu/effects/tunnel_effect.cc   # <-- Add here
+    src/gpu/effects/chroma_aberration_effect.cc
+```
+
+### 4. Include in demo_effects.h
+
+**File:** `src/gpu/demo_effects.h`
+
+**Action:** Add the include directive:
+```cpp
+#include "gpu/effects/<effect_name>_effect.h"
+```
+
+**Location:** Alphabetically, with the other effect includes
+
+### 5. Add to Timeline
+
+**File:** `workspaces/main/timeline.seq`
+
+**Format:**
+```
+SEQUENCE <start_time> <priority>
+  EFFECT <+|=|-> <EffectName>Effect <local_start> <local_end> [params...]
+```
+
+**Priority Modifiers (REQUIRED):**
+- `+` : Increment priority
+- `=` : Same priority as the previous effect
+- `-` : Decrement priority (for backgrounds)
+
+**Example:**
+```
+SEQUENCE 0.0 0
+  EFFECT + TunnelEffect 0.0 10.0
+```
+
+**Common Mistake:** Missing the priority modifier (`+`, `=`, `-`) after the EFFECT keyword
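+
+A hypothetical lint for exactly this mistake (not an existing project tool, just a sketch of the rule above):
+
+```python
+import re
+
+EFFECT_RE = re.compile(r"^\s*EFFECT\s+[+=-]\s+\w+Effect\b")
+
+def lint_timeline(path: str) -> int:
+    """Count EFFECT lines that are missing their +/=/- priority modifier."""
+    bad = 0
+    for n, line in enumerate(open(path), 1):
+        if line.lstrip().startswith("EFFECT") and not EFFECT_RE.match(line):
+            print(f"{path}:{n}: missing priority modifier (+, = or -)")
+            bad += 1
+    return bad
+
+# lint_timeline("workspaces/main/timeline.seq")
+```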
+
+### 6. Update Tests
+
+**File:** `src/tests/gpu/test_demo_effects.cc`
+
+**Action:** Add the effect to the appropriate list:
+
+**Post-Process Effects (lines 80-93):**
+```cpp
+{"TunnelEffect", std::make_shared<TunnelEffect>(fixture.ctx())},
+```
+
+**Scene Effects (lines 125-137):**
+```cpp
+{"TunnelEffect", std::make_shared<TunnelEffect>(fixture.ctx())},
+```
+
+**3D Effects:** If the effect requires Renderer3D, add it to the `requires_3d` check (lines 148-151)
+
+### 7. Build and Test
+
+```bash
+# Full build
+cmake --build build -j4
+
+# Run effect tests
+cmake -S . -B build -DDEMO_BUILD_TESTS=ON
+cmake --build build -j4 --target test_demo_effects
+cd build && ./test_demo_effects
+
+# Run all tests
+cd build && ctest
+```
+
+### 8. Verify
+
+**Checklist:**
+- [ ] Effect compiles without errors
+- [ ] Effect appears in the timeline
+- [ ] test_demo_effects passes
+- [ ] Effect renders correctly: `./build/demo64k`
+- [ ] No shader compilation errors
+- [ ] Follows the naming conventions
+
+---
+
+## Common Issues
+
+### Build Error: "no member named 'ASSET_..._SHADER'"
+
+**Cause:** Shader not in assets.txt, or wrong asset ID name
+
+**Fix:**
+1. Check that `workspaces/main/assets.txt` has the shader entry
+2. The asset ID is `ASSET_` + the uppercase entry name (e.g., `SHADER_TUNNEL` → `ASSET_SHADER_TUNNEL`)
+
+### Build Error: "undefined symbol for architecture"
+
+**Cause:** Effect not in the CMakeLists.txt GPU_SOURCES
+
+**Fix:** Add the `.cc` file to BOTH sections (headless and normal mode)
+
+### Timeline Parse Error: "Expected '+', '=', or '-'"
+
+**Cause:** Missing priority modifier after the EFFECT keyword
+
+**Fix:** Use `EFFECT +`, `EFFECT =`, or `EFFECT -` (never just `EFFECT`)
+
+### Test Failure: Effect not in test list
+
+**Cause:** Effect not added to test_demo_effects.cc
+
+**Fix:** Add it to the `post_process_effects` or `scene_effects` list
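+
+The name conversions used by the shell script in the next section, as an equivalent Python sketch (an illustrative helper, portable across platforms):
+
+```python
+import re
+
+def effect_names(effect: str) -> tuple[str, str]:
+    """CamelCase -> (snake_case, UPPER_SNAKE), e.g.
+    effect_names("ChromaAberration") == ("chroma_aberration", "CHROMA_ABERRATION")."""
+    snake = re.sub(r"(?<!^)([A-Z])", r"_\1", effect).lower()
+    return snake, snake.upper()
+```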
+
+---
+
+## Automation Script Example
+
+```bash
+#!/bin/bash
+# Example automation for AI agents
+
+EFFECT_NAME="$1"   # CamelCase (e.g., "Tunnel")
+# Portable CamelCase -> snake_case (GNU sed's \L is unavailable in BSD/macOS sed):
+SNAKE_NAME=$(echo "$EFFECT_NAME" | sed 's/\([A-Z]\)/_\1/g' | tr '[:upper:]' '[:lower:]' | sed 's/^_//')
+UPPER_NAME=$(echo "$SNAKE_NAME" | tr '[:lower:]' '[:upper:]')
+
+echo "Creating effect: $EFFECT_NAME"
+echo "  Snake case: $SNAKE_NAME"
+echo "  Upper case: $UPPER_NAME"
+
+# 1. Generate files (if converting from ShaderToy)
+# ./tools/shadertoy/convert_shadertoy.py shader.txt "$EFFECT_NAME"
+
+# 2. Add to assets.txt
+echo "SHADER_${UPPER_NAME}, NONE, shaders/${SNAKE_NAME}.wgsl, \"${EFFECT_NAME} effect\"" \
+  >> workspaces/main/assets.txt
+
+# 3. Add to CMakeLists.txt (both sections)
+# Use the Edit tool to add to both GPU_SOURCES sections
+
+# 4. Add the include to demo_effects.h
+# Use the Edit tool to add the #include line
+
+# 5. Add to timeline.seq
+# Use the Edit tool to add an EFFECT line with a priority modifier
+
+# 6. Add to the test file
+# Use the Edit tool to add to the appropriate test list
+
+# 7. Build
+cmake --build build -j4
+```
+
+---
+
+## See Also
+
+- `tools/shadertoy/README.md` - ShaderToy conversion guide
+- `doc/SEQUENCE.md` - Timeline format documentation
+- `doc/CONTRIBUTING.md` - General contribution guidelines
+- `src/gpu/effects/` - Existing effect examples
diff --git a/doc/HOWTO.md b/doc/HOWTO.md
index bdc0214..5ea6afd 100644
--- a/doc/HOWTO.md
+++ b/doc/HOWTO.md
@@ -86,12 +86,34 @@ make run_util_tests   # Utility tests
 
 ---
 
+## Training
+
+```bash
+./training/train_cnn.py --layers 3 --kernel_sizes 3,5,3 --epochs 10000 --batch_size 8 --input training/input/ --target training/output/ --checkpoint-every 1000
+```
+
+Generate shaders from a checkpoint:
+```bash
+./training/train_cnn.py --export-only training/checkpoints/checkpoint_epoch_7000.pth
+```
+
+Generate ground truth (for shader validation):
+```bash
+./training/train_cnn.py --infer input.png --export-only checkpoints/checkpoint_epoch_7000.pth --output ground_truth.png
+```
+
+**Note:** Kernel sizes must match the shader functions:
+- 3×3 kernel → `cnn_conv3x3_7to4` (36 weight rows: 9 pos × 4 channels)
+- 5×5 kernel → `cnn_conv5x5_7to4` (100 weight rows: 25 pos × 4 channels)
+
+---
+
 ## Timeline
 
 Edit `workspaces/main/timeline.seq`:
 
 ```text
 SEQUENCE 0.0 0
-  EFFECT HeptagonEffect 0.0 60.0 0
+  EFFECT + HeptagonEffect 0.0 60.0 0
 ```
 
 Rebuild to apply. See `doc/SEQUENCE.md`.
diff --git a/doc/RECIPE.md b/doc/RECIPE.md
index 6404391..d563027 100644
--- a/doc/RECIPE.md
+++ b/doc/RECIPE.md
@@ -157,8 +157,8 @@ void MyEffect::render(WGPUTextureView prev, WGPUTextureView target,
 
 **.seq syntax:**
 ```
-EFFECT MyEffect 0.0 10.0 strength=0.5 speed=3.0
-EFFECT MyEffect 10.0 20.0 strength=2.0   # speed keeps previous value
+EFFECT + MyEffect 0.0 10.0 strength=0.5 speed=3.0
+EFFECT = MyEffect 10.0 20.0 strength=2.0   # speed keeps previous value
 ```
 
 **Example:** `src/gpu/effects/flash_effect.cc`, `src/gpu/effects/chroma_aberration_effect.cc`
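+
+The carry-over rule in the comment above, as a tiny hypothetical Python resolver (an illustration of the documented semantics, not project code):
+
+```python
+def resolve_params(effect_lines: list[dict]) -> list[dict]:
+    """Each dict holds the params written on one EFFECT line; an omitted
+    param keeps the value from the previous line for the same effect."""
+    current: dict = {}
+    resolved = []
+    for params in effect_lines:
+        current = {**current, **params}
+        resolved.append(dict(current))
+    return resolved
+
+# resolve_params([{"strength": 0.5, "speed": 3.0}, {"strength": 2.0}])
+# -> [{'strength': 0.5, 'speed': 3.0}, {'strength': 2.0, 'speed': 3.0}]
+```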
