# Minimal Audio Tracker

In addition to generating spectrograms (aka "samples") on the fly and playing them immediately, we need a way to assemble samples (assets or generated) into modifiable patterns and loops, like the trackers of old.

## The idea

A script takes a 'tracker music' text file. Its first part describes sequences of samples ('patterns'). In a second part, these patterns are laid out on a timeline with timestamps (potentially overlapping) to form the full music score.

The patterns' samples (spectrograms) are not generated at this stage; the file is only the 'musical score'. We still need to 'play' the score, with modifiers applied.

### Modifiers

For diversity, these basic patterns can be modified on the fly just before being rendered as spectrograms. Modifiers could be:

* randomize (for drums, e.g.)
* add accents and stresses
* modulate volume
* add distortion or noise
* add 'grain' and static noise
* flanger, vocoding, robotic voice, etc.

These modifiers are applied to a predefined pattern just before it is rendered for playback (or assembled into the final track).

### How would that work in practice?

The musical score is a text file that a tool converts to run-time code, compiled into the final demo64k binary. This generated code can be mixed with fixed code from the demo codebase itself (e.g. an explosion predefined at a given time). The baking is done at compile time, and the generated code goes in src/generated/.
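As a rough illustration of the two-part score file and the compile-time baking step, here is a minimal sketch in Python. The file format, the `parse_score`/`emit_c` names, and the `ScoreEvent` C struct are all assumptions for illustration; the note does not specify the actual syntax or tool.

```python
# Sketch of the score-to-code baking tool. The score format below is
# hypothetical: 'pattern <name>' blocks list '<sample> <offset>' lines,
# then a 'score' block lists '<pattern> @ <start-time>' placements.

def parse_score(text):
    """Parse the two-part score file into patterns and a timeline."""
    patterns, timeline = {}, []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue                       # skip blanks and comments
        if line.startswith('pattern '):
            current = line.split()[1]      # start a new pattern block
            patterns[current] = []
        elif line == 'score':
            current = None                 # switch to the timeline part
        elif current is not None:
            sample, offset = line.split()  # '<sample> <offset>' line
            patterns[current].append((sample, float(offset)))
        else:
            name, _, time = line.partition('@')  # '<pattern> @ <time>'
            timeline.append((name.strip(), float(time)))
    return patterns, timeline

def emit_c(patterns, timeline):
    """Flatten the score into absolute-time events and emit a C array
    destined for src/generated/ (ScoreEvent is an assumed struct)."""
    events = []
    for name, start in timeline:
        for sample, offset in patterns[name]:
            events.append((sample, start + offset))
    events.sort(key=lambda e: e[1])
    lines = ["/* generated -- do not edit */",
             "static const ScoreEvent g_score[] = {"]
    for sample, t in events:
        lines.append(f'    {{ "{sample}", {t:.3f}f }},')
    lines.append("};")
    return "\n".join(lines)

if __name__ == "__main__":
    SCORE = """\
pattern kick
  kick.wav 0.0
  kick.wav 0.5
score
  kick @ 0.0
  kick @ 1.0
"""
    patterns, timeline = parse_score(SCORE)
    print(emit_c(patterns, timeline))
```

Running this prints a `g_score` array with four kick events at 0.0, 0.5, 1.0, and 1.5 seconds, ready to be compiled into the binary alongside hand-written demo code.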
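The modifier chain could also be sketched briefly. Assuming (purely for illustration) that a pattern is a list of `(sample, offset, volume)` events, two of the listed modifiers (volume modulation and drum randomization) might compose like this:

```python
# Sketch of modifiers applied to a pattern just before spectrogram
# generation. The (sample, offset, volume) event tuple is an assumed
# representation, not the demo's actual data structure.
import random

def modulate_volume(events, gain):
    """Scale every event's volume by a fixed gain."""
    return [(s, t, v * gain) for (s, t, v) in events]

def randomize_timing(events, jitter, rng=random.Random(0)):
    """Nudge each event's offset by a small random amount (for drums, e.g.).
    A seeded RNG keeps the 'random' feel reproducible between builds."""
    return [(s, t + rng.uniform(-jitter, jitter), v) for (s, t, v) in events]

def apply_modifiers(events, modifiers):
    """Chain modifiers over a pattern before it is rendered or baked."""
    for mod in modifiers:
        events = mod(events)
    return events

if __name__ == "__main__":
    pattern = [("kick.wav", 0.0, 1.0), ("snare.wav", 0.5, 1.0)]
    out = apply_modifiers(pattern,
                          [lambda e: modulate_volume(e, 0.5),
                           lambda e: randomize_timing(e, 0.01)])
    print(out)
```

Since each modifier maps an event list to an event list, the same chain can run either at playback time (for live diversity) or at bake time (when assembling the final track).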