1. Audio Resynthesis
A simple demonstration of granular quasi-resynthesis via partial tracks, where a sound is used to guide the behavior of sinusoidal grains.
In bellplay~, the basic building block of our scripts is the buffer.
One of the core features of bellplay~ is the ability to dynamically and flexibly apply chains of DSP algorithms to our buffers.
This example demonstrates how to align the envelopes of different audio samples based on their peak amplitude times.
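The tutorial itself does this with bellplay~ buffers and analysis functions; purely as a rough illustration of the idea, here is a minimal NumPy sketch (the `align_by_peak` helper and the test signals are made up for this example, not part of bellplay~) that pads whichever signal peaks earlier so both peaks coincide.

```python
# Conceptual sketch (NumPy, not bellplay~ code): align two mono signals
# so that their amplitude peaks coincide in time.
import numpy as np

def align_by_peak(a, b):
    """Zero-pad the start of whichever signal peaks earlier."""
    peak_a = int(np.argmax(np.abs(a)))
    peak_b = int(np.argmax(np.abs(b)))
    shift = peak_a - peak_b
    if shift > 0:
        b = np.concatenate([np.zeros(shift), b])   # delay b
    elif shift < 0:
        a = np.concatenate([np.zeros(-shift), a])  # delay a
    return a, b

a = np.hanning(100)                                   # peak near the middle
b = np.concatenate([np.hanning(50), np.zeros(80)])    # peak earlier
a2, b2 = align_by_peak(a, b)
assert np.argmax(np.abs(a2)) == np.argmax(np.abs(b2))
```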
This example automatically generates a custom keymap given a list of audio file paths:
Sometimes it can be more useful or desirable to apply processing to the entire output, instead of processing each buffer individually.
Although the bellplay~ graphical user interface (GUI) allows us to export the final output of each script, it's sometimes useful to be able to do the same programmatically through our scripts.
This tutorial shows how to use buffers to control audio parameters.
An essential part of writing code is being able to debug unwanted or unexpected behaviors.
One of the core features in bellplay~ is our ability to analyze buffers to extract relevant information.
When analyzing buffers, we can specify the output format for many of the available audio descriptors.
This tutorial provides an additional example for using buffer analysis features for audio processing.
This tutorial demonstrates how to create an in-memory, queryable corpus of audio buffers by leveraging the createdbtable and querydb functions.
In bellplay~, computation-heavy operations such as building large corpora or analyzing large amounts of audio data can take a very long time, making it tedious to experiment with our scripts every time we run them.
An example of basic audio granulation, where short grains of audio are layered to create a new texture.
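As a rough, language-agnostic illustration of what granulation involves, here is a minimal NumPy sketch (not bellplay~ code; the `granulate` helper and its parameters are hypothetical) that overlap-adds short windowed grains taken from random positions in a source signal.

```python
# Conceptual sketch (NumPy, not bellplay~ code): naive granulation by
# overlap-adding short, windowed grains taken from random source positions.
import numpy as np

def granulate(source, out_len, grain_len=2048, density=200, sr=44100, seed=0):
    rng = np.random.default_rng(seed)
    out = np.zeros(out_len)
    window = np.hanning(grain_len)
    n_grains = int(density * out_len / sr)     # grains per second of output
    for _ in range(n_grains):
        src_pos = rng.integers(0, len(source) - grain_len)
        dst_pos = rng.integers(0, out_len - grain_len)
        out[dst_pos:dst_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    return out / max(np.max(np.abs(out)), 1e-9)  # normalize

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s test tone
texture = granulate(source, out_len=2 * sr, sr=sr)
```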
When writing code, it's good practice to use descriptive variable names. For instance, noise to represent a noise signal, or saw for a sawtooth wave.
This tutorial demonstrates a very simple but consequential feature in bellplay~ — namely, the ability to reuse rendered buffers multiple times to further refine and sculpt the final output into complex and intricate sounds.
In bellplay~, the ezsampler function provides a minimal but flexible interface for mapping symbolic pitch and velocity information to audio buffers.
bellplay~ supports importing MIDI files (.mid or .midi) into our scripts, each described as a list of events.
This tutorial shows how to build k-dimensional trees to efficiently perform feature-based search on buffers.
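bellplay~ provides its own facilities for this; as a conceptual stand-in, here is a short SciPy sketch (the feature table and its columns are invented for illustration) showing how a k-d tree answers a nearest-neighbor query over per-buffer feature vectors.

```python
# Conceptual sketch (SciPy, not bellplay~ code): nearest-neighbor search
# over per-buffer feature vectors using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical feature table: one row per buffer, e.g. [centroid_hz, rms].
features = np.array([
    [440.0, 0.20],
    [880.0, 0.05],
    [1200.0, 0.30],
    [330.0, 0.10],
])
tree = cKDTree(features)

# Query: which buffer is closest to a target of 900 Hz centroid, 0.1 RMS?
dist, index = tree.query([900.0, 0.1])
print(f"closest buffer index: {index} (distance {dist:.2f})")
```

In practice each feature dimension would typically be normalized before building the tree, so that no single descriptor dominates the distance.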
Depending on what you're doing, you might want to have more control over what the score transcription looks like.
This code demonstrates a feedback-based synthesis technique, where buffers are routed back into their own processing chain to create a rich, evolving drone.
In bellplay~ we can also generate buffers by importing our own audio files into our scripts.
When transcribing buffers, we get to specify essential information about how each buffer fits within the final output.
A basic example of waveshaping in bellplay~, using a randomly generated breakpoint function.
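As a rough sketch of the same idea outside bellplay~, the NumPy example below (the breakpoint values and test signal are arbitrary) builds a random breakpoint transfer function and applies it to a sine tone by interpolation.

```python
# Conceptual sketch (NumPy, not bellplay~ code): waveshaping a sine tone
# through a randomly generated breakpoint transfer function.
import numpy as np

rng = np.random.default_rng(1)
# Random transfer curve defined by breakpoints over the input range [-1, 1].
xs = np.linspace(-1.0, 1.0, 9)
ys = np.sort(rng.uniform(-1.0, 1.0, 9))   # sorted, so the curve stays monotonic

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 110 * t)

shaped = np.interp(tone, xs, ys)          # apply the breakpoint function
```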
An example of using time-varying resampling to generate a polyphonic texture.
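For a rough picture of the underlying mechanism, here is a NumPy sketch (not bellplay~ code; `resample_varispeed` is a hypothetical helper) that integrates a changing playback rate into read positions and interpolates the source at those positions.

```python
# Conceptual sketch (NumPy, not bellplay~ code): time-varying resampling of
# a source buffer by integrating a changing playback rate and interpolating.
import numpy as np

def resample_varispeed(source, rate_curve):
    """rate_curve: playback rate per output sample (1.0 = original speed)."""
    positions = np.cumsum(rate_curve)              # fractional read positions
    positions = positions[positions < len(source) - 1]
    return np.interp(positions, np.arange(len(source)), source)

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
# Glide from half speed to double speed over two seconds of output.
rates = np.linspace(0.5, 2.0, 2 * sr)
voice = resample_varispeed(source, rates)
# Layering several such voices with different rate curves yields a
# polyphonic texture of the kind the tutorial describes.
```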
Similar to the transcription stage, the rendering stage allows us to define important aspects about the final output.
As mentioned in earlier tutorials, buffers in bellplay~ are simply nested lists of key-value pairs.
This script illustrates how to construct and use an nth-order Markov model from MIDI data in bellplay~ for generative music.
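The script builds the model with bellplay~'s own tools; as a generic illustration of what an nth-order Markov model over MIDI pitches looks like, here is a small plain-Python sketch (the helper names and toy melody are made up for this example).

```python
# Conceptual sketch (plain Python, not bellplay~ code): an nth-order Markov
# model over a pitch sequence, e.g. pitches read from a parsed MIDI file.
import random
from collections import defaultdict

def build_markov(pitches, order=2):
    table = defaultdict(list)
    for i in range(len(pitches) - order):
        state = tuple(pitches[i:i + order])        # preceding n pitches
        table[state].append(pitches[i + order])    # observed continuation
    return table

def generate(table, seed_state, length=16, seed=0):
    rng = random.Random(seed)
    out = list(seed_state)
    state = tuple(seed_state)
    for _ in range(length):
        choices = table.get(state)
        if not choices:                            # dead end: stop early
            break
        nxt = rng.choice(choices)
        out.append(nxt)
        state = state[1:] + (nxt,)
    return out

pitches = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # toy melody (MIDI notes)
model = build_markov(pitches, order=2)
print(generate(model, seed_state=(60, 62), length=12))
```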
Sometimes it's useful to include markers in the transcription.
An example of just intonation (JI) based retuning of MIDI events.
Randomness is frequently used in algorithmic music to introduce variation and unpredictability. However, in many compositional workflows, reproducibility is just as important. In bellplay~, there are two types of random functions.
A basic example of temporal quantization, where transient-based segments are temporally shifted to align with a rhythmic grid.
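Conceptually, the core step is snapping detected onset times to the nearest grid line; the small Python sketch below (helper name and numbers invented for illustration, not bellplay~ code) shows that step in isolation.

```python
# Conceptual sketch (plain Python, not bellplay~ code): quantize segment
# onset times (in seconds) to the nearest point on a rhythmic grid.
def quantize_onsets(onsets, bpm=120.0, subdivision=4):
    """Snap each onset to the nearest 1/subdivision-of-a-beat grid line."""
    grid = 60.0 / bpm / subdivision          # grid spacing in seconds
    return [round(t / grid) * grid for t in onsets]

onsets = [0.02, 0.49, 0.61, 1.13]            # e.g. detected transient times
print(quantize_onsets(onsets))               # -> [0.0, 0.5, 0.625, 1.125]
```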
An example of basic audio mosaicking in bellplay~, where a target audio file is reconstructed using segments drawn from a small audio corpus.
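As a very reduced stand-in for the technique, the NumPy sketch below (not bellplay~ code; matching on a single RMS feature is a deliberate simplification) rebuilds a target frame by frame from whichever corpus segment is closest in loudness.

```python
# Conceptual sketch (NumPy, not bellplay~ code): naive mosaicking that
# rebuilds a target signal frame by frame from the closest corpus segment,
# matching on a single feature (RMS) for simplicity.
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

def mosaic(target, corpus_segments, frame_len=2048):
    corpus_rms = np.array([rms(seg[:frame_len]) for seg in corpus_segments])
    out = np.zeros(len(target))
    for start in range(0, len(target) - frame_len, frame_len):
        frame = target[start:start + frame_len]
        best = int(np.argmin(np.abs(corpus_rms - rms(frame))))
        out[start:start + frame_len] = corpus_segments[best][:frame_len]
    return out

sr = 44100
target = np.sin(2 * np.pi * 220 * np.arange(sr) / sr) * np.linspace(1, 0, sr)
rng = np.random.default_rng(0)
corpus = [rng.uniform(-a, a, 4096) for a in (0.1, 0.4, 0.8)]  # quiet/mid/loud noise
result = mosaic(target, corpus)
```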
In many cases, we will want to have some kind of DAW-style automation of certain parameters when generating or processing audio in our scripts.
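bellplay~ has its own way of expressing this; purely as an illustration of the breakpoint-automation idea, here is a NumPy sketch (times and values are arbitrary) that samples a breakpoint curve at every sample and applies it as an amplitude envelope.

```python
# Conceptual sketch (NumPy, not bellplay~ code): DAW-style breakpoint
# automation, here applied as a per-sample amplitude envelope.
import numpy as np

sr = 44100
dur = 2.0
t = np.arange(int(sr * dur)) / sr

# Automation breakpoints: (time in seconds, value), linearly interpolated.
times  = [0.0, 0.5, 1.5, 2.0]
values = [0.0, 1.0, 1.0, 0.0]
envelope = np.interp(t, times, values)

tone = np.sin(2 * np.pi * 220 * t)
automated = tone * envelope
```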