| author | skal <pascal.massimino@gmail.com> | 2026-02-02 21:43:20 +0100 |
|---|---|---|
| committer | skal <pascal.massimino@gmail.com> | 2026-02-02 21:43:20 +0100 |
| commit | 4fc02a8d2acf1eafce36c1348261890d54b8b5b5 (patch) | |
| tree | 2a02b601c4407a12cafa3b15ff6eb36759d0e302 | |
| parent | 8be1646fb537e0764a91c36b9cfc45ba62cbb071 (diff) | |
add a TRACKER idea to the project
| -rw-r--r-- | GEMINI.md | 1 |
| -rw-r--r-- | PROJECT_CONTEXT.md | 2 |
| -rw-r--r-- | doc/TRACKER.md | 43 |

3 files changed, 46 insertions, 0 deletions
diff --git a/GEMINI.md b/GEMINI.md
--- a/GEMINI.md
+++ b/GEMINI.md
@@ -8,6 +8,7 @@
 @3D.md
 @TODO.md
 @SPEC_EDITOR.md
+@TRACKER.md
 @PROCEDURAL.md
 @src/util/asset_manager.h
 @tools/asset_packer.cc
diff --git a/PROJECT_CONTEXT.md b/PROJECT_CONTEXT.md
index e356fe8..a938801 100644
--- a/PROJECT_CONTEXT.md
+++ b/PROJECT_CONTEXT.md
@@ -13,6 +13,7 @@ Audio:
 - 32 kHz, 16-bit mono
 - Procedurally generated samples
 - Real-time additive synthesis from spectrograms (IDCT)
+- Modifiable loops and patterns, w/ a script to generate them (like a tracker)
 
 Constraints:
 - Size-sensitive
@@ -85,3 +86,4 @@ Style:
 - **Synthesis**: Real-time additive synthesis from spectrograms via IDCT.
 - **Dynamic Updates**: Double-buffered spectrograms for live thread-safe updates.
 - **Procedural Library**: Melodies and spectral filters (noise, comb) generated at runtime.
+- **Patterns and loops**: spectrograms grouped into patterns and loops; modifiers can be applied to loops (randomize, accents, etc.)
diff --git a/doc/TRACKER.md b/doc/TRACKER.md
new file mode 100644
index 0000000..cb14755
--- /dev/null
+++ b/doc/TRACKER.md
@@ -0,0 +1,43 @@
+# Minimal Audio Tracker
+
+In addition to being able to generate spectrograms (aka "samples") on the
+fly and play them right away, we need a way to assemble samples (assets or
+generated) into modifiable patterns and loops, like what trackers were
+doing back in the day.
+
+## The idea
+
+A script can take a 'tracker music' text file that describes the sequences
+of samples (the 'patterns') in a first part. In a second part, these
+sequences are laid out with timestamps (potentially overlapping) to
+generate the full music score.
+
+The patterns' samples (spectrograms) are not yet generated at this point;
+this is just the 'musical score'. We still need to 'play' the score, with
+modifiers applied.
+
+### Modifiers
+
+For diversity, these base patterns can be modified on the fly before
+being generated as spectrograms.
+Modifiers could be:
+ * randomize (for drums, e.g.)
+ * add accents and stresses
+ * modulate volume
+ * add distortion or noise
+ * add 'grain' and static noise
+ * flanger, vocoding, robotic voice, etc.
+
+These modifiers are applied to a predefined pattern just before it is
+generated for playback (or assembled into the final track).
+
+### How would that work in practice?
+
+The musical score is a text file that a tool converts to run-time code,
+compiled into the final demo64k binary.
+This generated code can be mixed with fixed code from the demo codebase
+itself (an explosion predefined at a given time, etc.).
+The baking is done at compile time, and the generated code goes in
+src/generated/.
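To make the modifier idea above concrete, here is a minimal C++ sketch of what the baked output in src/generated/ could look like: a pattern as a list of timed sample triggers, plus a 'randomize' modifier that jitters timing and volume. All names (`Step`, `Pattern`, `ApplyRandomize`) and the data layout are illustrative assumptions, not the actual demo64k code.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One entry of a pattern: which sample to trigger, when, and how loud.
// (Hypothetical layout; the real generated code may differ.)
struct Step {
  int sample_id;   // index into the spectrogram/sample bank
  float time;      // offset in seconds within the pattern
  float volume;    // 0.0 .. 1.0
};
using Pattern = std::vector<Step>;

// Tiny deterministic PRNG (xorshift32), so the 'randomize' modifier is
// reproducible at compile-time baking as well as at runtime.
// Returns a value in [-1, 1); 'state' must be non-zero.
static float NextRand(uint32_t* state) {
  *state ^= *state << 13;
  *state ^= *state >> 17;
  *state ^= *state << 5;
  return (*state / 2147483648.f) - 1.f;
}

// 'randomize' modifier (for drums, e.g.): jitter each step's timing and
// volume slightly, clamping volume back into [0, 1].
Pattern ApplyRandomize(Pattern p, float time_jitter, float vol_jitter,
                       uint32_t seed) {
  for (Step& s : p) {
    s.time += NextRand(&seed) * time_jitter;
    s.volume = std::min(1.f, std::max(0.f, s.volume + NextRand(&seed) * vol_jitter));
  }
  return p;
}
```

The other modifiers listed above (accents, volume modulation, etc.) would follow the same shape: a pure function from `Pattern` to `Pattern`, applied just before the pattern's spectrograms are generated.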