# Minimal Audio Tracker
In addition to generating spectrograms (aka "samples") on the fly and
playing them immediately, we need a way to assemble samples (assets or
generated ones) into modifiable patterns and loops, much like classic
trackers did.
## The idea
A script takes a 'tracker music' text file. Its first part describes
sequences of samples ('patterns'). Its second part lays these patterns
out with timestamps (potentially overlapping) to produce the full music
score.
At this stage the patterns' samples (spectrograms) are not yet generated;
the file is only the 'musical score'. The score still has to be 'played',
with modifiers applied.
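A minimal sketch of what such a two-part score file and its parser could look like. The syntax (`[patterns]`/`[score]` sections, `name = steps` lines, `timestamp pattern` lines) is a hypothetical illustration, not a format fixed by this document:

```python
# Hypothetical two-part score: patterns first, then a timestamped layout.
SCORE_TEXT = """\
[patterns]
drums = kick . snare . kick kick snare .
lead  = c4 e4 g4 e4

[score]
0.0  drums
2.0  drums
2.0  lead
"""

def parse_score(text):
    """Parse the score into a pattern table and a (time, name) timeline."""
    patterns, timeline = {}, []
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("["):
            section = line.strip("[]")
        elif section == "patterns":
            name, steps = line.split("=", 1)
            patterns[name.strip()] = steps.split()
        elif section == "score":
            time, name = line.split()
            timeline.append((float(time), name))
    return patterns, timeline

patterns, timeline = parse_score(SCORE_TEXT)
print(patterns["drums"])  # step list for the drums pattern
print(timeline)           # (timestamp, pattern name) pairs
```

Note that two timeline entries share timestamp 2.0, which is how overlapping patterns would be expressed.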
### Modifiers
For diversity, these base patterns can be modified on the fly just
before being rendered as spectrograms.
Modifiers could be:
* randomize (for drums, e.g.)
* add accents and stresses
* modulate volume
* add distortion or noise
* add 'grain' and static noise
* flanger, vocoding, robotic voice, etc.
These modifiers are applied to a predefined pattern just before it is
generated for playback (or assembly into the final track).
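One way the modifier step could work is as a chain of functions applied to a pattern's step list right before generation. The names (`randomize`, `accent`) and the step-list representation are assumptions for illustration:

```python
import random

def randomize(steps, rng, p=0.2):
    """Occasionally drop a hit (probability p) to humanize drum patterns."""
    return [s if s == "." or rng.random() > p else "." for s in steps]

def accent(steps, every=4):
    """Stress every Nth non-rest step by tagging it for louder playback."""
    return [s + "!" if i % every == 0 and s != "." else s
            for i, s in enumerate(steps)]

def apply_modifiers(steps, modifiers):
    """Run the pattern through each modifier in order."""
    for mod in modifiers:
        steps = mod(steps)
    return steps

rng = random.Random(42)  # seeded so variations are reproducible
base = ["kick", ".", "snare", ".", "kick", "kick", "snare", "."]
varied = apply_modifiers(base, [lambda s: randomize(s, rng), accent])
print(varied)
```

Each call produces a slightly different pattern from the same base, which is the point: one stored pattern, many playback variants.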
### How would that work in practice?
The musical score is a text file that a tool converts into run-time
code, compiled into the final demo64k binary.
The generated code can be mixed with fixed code from the demo codebase
itself (e.g. an explosion scheduled at a predefined time).
The baking is done at compile time, and the generated code goes into src/generated/
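The baking step could be sketched as a build script that turns the parsed timeline into a C table to be compiled into the demo. The struct name `ScoreEvent`, the `PATTERN_*` identifiers, and the emitted layout are all hypothetical:

```python
def emit_score_c(timeline):
    """Emit a C source fragment holding the score as a constant event table."""
    lines = ["// generated file -- do not edit",
             '#include "score.h"',
             "const ScoreEvent g_score[] = {"]
    for time, pattern in sorted(timeline):
        lines.append(f"    {{ {time}f, PATTERN_{pattern.upper()} }},")
    lines.append("};")
    lines.append(f"const int g_score_len = {len(timeline)};")
    return "\n".join(lines)

timeline = [(0.0, "drums"), (2.0, "drums"), (2.0, "lead")]
print(emit_score_c(timeline))
```

The script would write this output under src/generated/ so the build picks it up like any other source file, keeping the score data tiny and static, which matters in a 64k binary.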