I have just discovered Serdar Camlica's works via Videoart.net: so organic and synthetic!
Awaken, his August 2006 release, received the Animation of the Year award at the 2006 digifestival.net in Italy.
His web site is 3dfiction.com
Friday, November 17, 2006
Sunday, November 12, 2006
Das apokalyptische Weib
The vvvv site has some pictures of the project Das apokalyptische Weib. I put this information on this blog for two reasons: vvvv is a toolkit for real-time video synthesis, thus another tool to consider, and my colleague from HplanK and I had a similar performance experience several years ago. For the celebration of Manessier, Didier Debril made a composition (computer and synthesizers) and I created 3D scenes (VRML97) that were projected inside a church. Projectors were also used to light stained-glass windows made by Manessier in this church.
The MP3 and pictures are online here (in French)
Friday, November 10, 2006
Reactable improvisation
The Reactable again. A long improvisation that sounds like an analog modular synthesizer.
Tuesday, October 31, 2006
NeoFormation
An AudioVideoPainting made with HPKComposer: no samples, no score, Csound audio generators only.
HPKComposer and the Csound API
Since version 5 of Csound, a rich API has been developed for use from many languages (C++, Python, Java, Lua...). The Csound developers have done a great job, making it possible to build really sophisticated applications that embed the Csound engine.
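As an illustration, here is a minimal sketch of embedding the engine from Java. It assumes the SWIG-generated csnd bindings (csnd.jar plus the _jcsound native library) distributed with Csound 5; the file name "composition.csd" is a placeholder.

```java
import csnd.Csound;

// Minimal embedding sketch: compile a .csd and run it block by block.
public class EmbeddedCsound {
    static {
        // native half of the csnd bindings
        System.loadLibrary("_jcsound");
    }

    public static void main(String[] args) {
        Csound cs = new Csound();
        // Compile() returns 0 on success
        if (cs.Compile("composition.csd") == 0) {
            // PerformKsmps() computes one ksmps block of audio and
            // returns nonzero once the performance is finished
            while (cs.PerformKsmps() == 0) {
                // per-block work (bus reads, UI updates...) goes here
            }
        }
        cs.Cleanup();
    }
}
```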
HPKComposer is based on that capability. I wanted a common paradigm for controlling the evolution of both the visual and the audio parameters. This paradigm is the use of looped linear segments: it is simple, and different cycle durations provide an ever-changing result. Csound has an opcode for doing that (loopseg), and that is fine: as sound parameters must be refreshed at a much higher rate than the graphical ones, the audio engine must drive the graphical world. This is made possible by another Csound 5 feature: the software bus and the capability to export a global variable as a channel of the bus. This global variable can be read by the host program through the Csound API, and this is how the loopseg values are passed to HPKComposer for controlling the graphical evolution.
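Here is a sketch of that bus mechanism, under the same csnd binding assumptions; the channel name "env1" is illustrative, and the orchestra excerpt in the comment shows the exporting side.

```java
import csnd.Csound;

// Orchestra side (excerpt): export a global k-variable on the software
// bus, e.g. driven by loopseg:
//
//   gkenv chnexport "env1", 2    ; Csound writes, the host reads
//
public class BusRead {
    static { System.loadLibrary("_jcsound"); }

    public static void main(String[] args) {
        Csound cs = new Csound();
        if (cs.Compile("composition.csd") == 0) {
            while (cs.PerformKsmps() == 0) {
                // current k-rate value of the exported loopseg curve
                double env = cs.GetChannel("env1");
                // hand env over to the graphical world here
            }
        }
        cs.Cleanup();
    }
}
```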
Thus the HPKComposer real-time performance program is a Java program using two threads, as sketched after this list:
- the master thread controls all the image processing and reads the values generated by the Csound engine
- the secondary thread, created by the master thread, runs the Csound engine
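A sketch of this two-thread layout, again assuming the csnd bindings; the names and the frame pacing are illustrative, not HPKComposer's actual code.

```java
import csnd.Csound;

public class TwoThreadPerformance {
    static { System.loadLibrary("_jcsound"); }

    public static void main(String[] args) throws InterruptedException {
        final Csound cs = new Csound();
        if (cs.Compile("composition.csd") != 0) return;

        // secondary thread: runs the Csound engine block by block
        Thread audio = new Thread(new Runnable() {
            public void run() {
                while (cs.PerformKsmps() == 0) {
                    // audio keeps running; k-rate values appear on the bus
                }
            }
        });
        audio.start();

        // master thread: reads bus values and drives the image processing
        while (audio.isAlive()) {
            double env = cs.GetChannel("env1"); // exported loopseg value
            renderFrame(env);
            Thread.sleep(16); // ~60 fps pacing, purely illustrative
        }
        cs.Cleanup();
    }

    // placeholder for the OpenGL/LWJGL image processing
    static void renderFrame(double env) { }
}
```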
Another interesting feature of the Csound API is the possibility to advance the Csound execution engine by a precise number of samples. It is perfect for writing image buffers to disk: the Csound thread runs the audio engine for the number of samples required for a video frame, then the main thread reads the loopseg values, calculates the image and writes it to disk. This is how I produce the sequence of images that VirtualDub will transform into a video.
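A sketch of that offline, frame-accurate rendering loop, under the same csnd binding assumptions; the frame rate and the writeImage() helper are hypothetical.

```java
import csnd.Csound;

// Advance the engine by just enough ksmps blocks to cover one video
// frame, then read the bus and draw the corresponding image.
public class OfflineRender {
    static { System.loadLibrary("_jcsound"); }

    public static void main(String[] args) {
        Csound cs = new Csound();
        if (cs.Compile("composition.csd") != 0) return;

        double sr = cs.GetSr();       // audio sample rate
        double ksmps = cs.GetKsmps(); // samples per control block
        int fps = 25;                 // target video frame rate, illustrative
        long blocksPerFrame = Math.round(sr / (ksmps * fps));

        boolean done = false;
        for (int frame = 0; !done; frame++) {
            // run the audio engine for one video frame's worth of samples
            for (long b = 0; b < blocksPerFrame && !done; b++) {
                done = cs.PerformKsmps() != 0;
            }
            double env = cs.GetChannel("env1"); // loopseg value for this frame
            writeImage(frame, env);
        }
        cs.Cleanup();
    }

    // hypothetical: compute the image and save it as a numbered file
    // that VirtualDub can assemble into a video
    static void writeImage(int frame, double env) { }
}
```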
Monday, October 30, 2006
Kaoss Pad 3 controls motion graphics
YouTube, via the matrixsynth blog. The Kaoss Pad 3 provides nice visual feedback.
Friday, October 27, 2006
TerraFluid
This AudioVideoPainting is made with HPKComposer. Sounds are generated with Csound audio generators only: no samples and no score.
HPKComposer
HPKComposer is a tool I am writing for experimenting with "Audio Video paintings". It is based on the Csound 5.0 synthesis engine and on an image transformation engine written with a Java OpenGL library (LWJGL) and exploiting GLSL pixel shaders.
The composition principles enforced by this tool are the following:
- the composition, both the audio and the video parts, is generated in real time,
- a composition is made of layers, each layer combining audio synthesis done with Csound and image processing done with pixel shaders. Each layer has a transparency value that can be modulated; linking this value to the audio volume is the primary way of creating a composition,
- segment curves are used for controlling the evolution of the image processing and audio synthesis parameters. These curves loop when the last point is reached and, by using segments of different lengths, give an ever-changing result (a plain-Java sketch of this idea follows the list),
- it is possible to interact live with "Audio Video paintings" using MIDI controllers.
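To make the segment-curve idea concrete, here is a self-contained plain-Java sketch that mirrors what Csound's loopseg opcode does on the audio side: a curve defined by (duration, target) pairs that wraps around, so curves with different total lengths drift against each other and the combined result never exactly repeats. The class name and values are illustrative.

```java
// Looped linear segments: the common control paradigm for audio and video.
public class SegmentCurve {
    private final double[] times;   // cumulative segment end times, seconds
    private final double[] values;  // value reached at the end of each segment
    private final double period;    // total loop duration

    public SegmentCurve(double[] durations, double[] targets) {
        times = new double[durations.length];
        values = targets.clone();
        double t = 0;
        for (int i = 0; i < durations.length; i++) {
            t += durations[i];
            times[i] = t;
        }
        period = t;
    }

    // value at time t (seconds), looping over the whole curve
    public double valueAt(double t) {
        double x = t % period;
        double prevTime = 0;
        double prevValue = values[values.length - 1]; // wrap from the last point
        for (int i = 0; i < times.length; i++) {
            if (x <= times[i]) {
                double f = (x - prevTime) / (times[i] - prevTime);
                return prevValue + f * (values[i] - prevValue);
            }
            prevTime = times[i];
            prevValue = values[i];
        }
        return values[values.length - 1];
    }

    public static void main(String[] args) {
        // a 4-second curve: rise to 1 in 1 s, fall back to 0 in 3 s, looped
        SegmentCurve c = new SegmentCurve(new double[]{1, 3}, new double[]{1, 0});
        for (double t = 0; t < 8; t += 0.5) {
            System.out.printf("t=%.1f  v=%.2f%n", t, c.valueAt(t));
        }
    }
}
```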
Introduction to Audio Video Synthesis Blog
This blog is about audio and video synthesis. It is more and more feasible to use software on personal computers to generate audio and video compositions in real time. As more composers explore this field of mixing audio and video in a seamless environment, this blog will focus on their works and the tools they use.