Since version 5 of Csound, a rich API has been available for use from many languages (C++, Python, Java, Lua...). The Csound developers have done a great job, making it possible to build truly sophisticated applications that embed the Csound engine.
HPKComposer is based on that capability. I wanted a common paradigm for controlling the evolution of both visual and audio parameters. This paradigm is looped linear segments: it is simple, and different cycle durations produce an ever-changing result. Csound has an opcode for exactly this (loopseg), which is convenient: since sound parameters must be refreshed at a much higher rate than graphical ones, the audio engine must drive the graphical world. This is made possible by another Csound 5 feature: the software bus and the ability to export a global variable as a channel on the bus. The host program can read this global variable through the Csound API, and this is how the loopseg values are passed to HPKComposer to control the graphical evolution.
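To make the looped-linear-segments idea concrete, here is a minimal Java sketch of a loopseg-style envelope: piecewise linear segments that wrap around when the last breakpoint is reached. The class and method names are illustrative, not part of the Csound API or of HPKComposer itself.

```java
// Sketch of a loopseg-style generator: linear segments between
// breakpoints, looping when the total duration is exceeded.
public class LoopSeg {
    private final double[] values;    // control value at each breakpoint
    private final double[] durations; // duration of each segment, in seconds
    private final double totalDuration;

    public LoopSeg(double[] values, double[] durations) {
        if (values.length != durations.length + 1)
            throw new IllegalArgumentException("need one more value than durations");
        this.values = values;
        this.durations = durations;
        double sum = 0.0;
        for (double d : durations) sum += d;
        this.totalDuration = sum;
    }

    /** Value of the looped envelope at time t (seconds). */
    public double valueAt(double t) {
        double phase = t % totalDuration; // loop back to the start
        for (int i = 0; i < durations.length; i++) {
            if (phase < durations[i]) {
                double frac = phase / durations[i];
                return values[i] + frac * (values[i + 1] - values[i]);
            }
            phase -= durations[i];
        }
        return values[values.length - 1];
    }
}
```

Because each curve has its own total duration, two curves driving different parameters drift in and out of phase, which is where the ever-changing quality comes from.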
Thus the HPKComposer real-time performance program is a Java program using two threads:
the master thread controls all the image processing and reads the values generated by the Csound engine
the secondary thread, created by the master thread, runs the Csound engine
This setup works nicely and is very stable and smooth.
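The two-thread layout can be sketched in plain Java as follows. This is an illustration, not the actual HPKComposer code: a secondary thread stands in for the Csound engine and publishes a control value, and the master thread reads it, much as the real program reads a software-bus channel.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative two-thread layout: a worker thread (standing in for the
// Csound engine) publishes a value; the master thread reads it.
public class TwoThreads {
    // volatile guarantees the master thread sees the latest value
    private volatile double busValue = 0.0;
    private final AtomicBoolean running = new AtomicBoolean(true);

    public Thread startEngineThread() {
        Thread engine = new Thread(() -> {
            double t = 0.0;
            while (running.get()) {
                busValue = Math.sin(t); // stand-in for a loopseg channel
                t += 0.01;
                try { Thread.sleep(1); } catch (InterruptedException e) { break; }
            }
        });
        engine.start();
        return engine;
    }

    public double readBus() { return busValue; }

    public void stop() { running.set(false); }
}
```

In the real program, the secondary thread would call into the Csound engine through the API instead of computing a sine, and `readBus()` would fetch the exported loopseg channels.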
Another interesting feature of the Csound API is the possibility of advancing the execution engine by a precise number of samples. This is perfect for writing image buffers to disk: the Csound thread runs the audio engine for the number of samples required for one video frame, then the main thread reads the loopseg values, computes the image, and writes it to disk. This is how I produce the sequence of images that VirtualDub will turn into a video.
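The sample-accurate offline rendering described above comes down to a simple piece of arithmetic plus a loop. The sketch below is hypothetical: the sample rate, frame rate, and the abstract engine/image calls in the comments are assumptions for illustration, not the actual HPKComposer API.

```java
// Hypothetical offline-render helper: advance the audio engine by
// exactly one video frame's worth of samples, then render the image.
public class OfflineRender {
    /** Number of audio samples the engine must advance per video frame. */
    public static long samplesPerFrame(long sampleRate, long frameRate) {
        return sampleRate / frameRate; // e.g. 44100 / 25 = 1764
    }

    // Sketch of the render loop, with engine and image calls left abstract:
    //   for (int frame = 0; frame < frameCount; frame++) {
    //       engine.advance(samplesPerFrame(44100, 25)); // run Csound for one frame
    //       double v = engine.readChannel("loopseg1");  // read the bus value
    //       image.render(v);                            // compute the image
    //       image.writeToDisk(frame);                   // save for VirtualDub
    //   }
}
```

Because the engine only ever advances in exact frame-sized steps, the audio and the image sequence stay perfectly synchronized no matter how long each image takes to compute.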
HPKComposer is a tool I am writing for experimenting with "Audio Video paintings". It is based on the Csound 5.0 synthesis engine and on an image transformation engine written with a Java OpenGL library (LWJGL) and exploiting GLSL pixel shaders.
The composition principles enforced by this tool are the following:
both the audio and the video parts of the composition are generated in real time,
a composition is made of layers, each layer combining audio synthesis done with Csound and image processing done with pixel shaders. Each layer has a transparency value that can be modulated; linking this value to the audio volume is the primary way of creating a composition,
segment curves are used to control the evolution of image processing and audio synthesis parameters. These curves loop when the last point is reached and, by using segments of different lengths, give an ever-changing result,
it is possible to interact live with "Audio Video paintings" using MIDI controllers.
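The volume-to-transparency link mentioned in the principles above can be sketched as a small mapping function. Everything here is an assumption for illustration (the class, method, and the shaping exponent are not from HPKComposer): it simply clamps an amplitude into the 0..1 range and optionally shapes the response curve before using it as a layer's alpha.

```java
// Hypothetical helper: map an audio amplitude to a layer alpha value,
// the primary compositional link between sound and image.
public class LayerAlpha {
    /**
     * @param volume audio amplitude, nominally in 0..1 (clamped)
     * @param shape  response exponent; > 1 keeps quiet passages more transparent
     */
    public static float alphaFromVolume(double volume, double shape) {
        double clamped = Math.max(0.0, Math.min(1.0, volume));
        return (float) Math.pow(clamped, shape);
    }
}
```

With `shape` = 1 the layer's visibility tracks the volume linearly; larger exponents make a layer fade in only on louder passages, which gives each layer its own visual dynamics.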
This tool is not publicly available, as I am short of time to support it, but I will post some works done with it, hoping this is a way to exchange ideas.
This blog is about audio and video synthesis. It is more and more feasible to use software on personal computers for generating audio and video compositions in real time. As more composers explore this field of mixing audio and video in a seamless environment, this blog will focus on their works and the tools they use.