Saturday, February 20, 2016

Portfolio

This blog is going to end soon. From now on I'm going to focus on my portfolio on Cargo Collective: http://cargocollective.com/ferrerabertran

Sunday, October 25, 2015

Berlin's Jazz Picnic

Since summer 2015 I have been enjoying a new event in Berlin called "Jazz Picnic". As a regular attendee and participant I have had the pleasure of meeting musicians from all over the world, playing together and exchanging ideas. The format of the event is pretty minimalistic: get together and play jazz tunes. The picnic is held regularly on Sundays at Melitta Sundström. It is also on Meetup.



Kommerz Was Yesterday

I had the pleasure of collaborating with artist Claire Waffel on her project "Kommerz Was Yesterday". We performed as part of the first Off-Biennial Budapest thanks to Igor Metropol. We also performed at a "Sundowner" event in Berlin, next to the Neue Nationalgalerie.

This collaboration as a performance sound artist has allowed me to broaden my musical language and technique, and I'm really looking forward to doing more of this kind of work.

"Kommerz Was Yesterday" unravels the past and future of a formerly grand Budapest shopping centre, a building that has now fallen into disrepair. Collages are produced in real time alongside digital sound loops.



Wednesday, April 8, 2015

Concerts in Berlin

It's been almost three years now since I moved to Berlin. A lot of interesting things have happened since then, especially regarding my musical projects:

With my solo show "The Cat Talked" I performed on the street in Prenzlauer Berg thanks to Fabrik der Kulturen. Besides other small concerts, I also played at Madame Claude, one of the best bars for live indie music.



Together with Guy Boldon on drums I presented "Peppercat" at Marie Antoinette. This is probably the personal project I'm most fond of so far.




As bassist of Moon Milk for Cancer Cat I had the chance to play at some of Berlin's best venues: Sage Club, Lido, ... I even dressed up as a penguin!



And somehow it feels like this is just the beginning!

Wednesday, March 31, 2010

Virtual Theremin

I have been working on a prototype of a synthesizer that converts the output of a webcam into sound. The name 'Virtual Theremin' is not very original, as there appear to be other similar projects with exactly that name (I am very bad at naming things). I will talk about them later.

You can see what the VT looks like in these demo videos I just recorded.

So how does it work?

To illustrate how the VT is implemented, let's imagine our friend Felix the Cat is the input of the VT at a certain moment in time. The state of the wave function is then obtained by computing two projections: one onto the X axis and one onto the Y axis. To compute the projections I simply take p(x) = sum(img(x, y)) over all possible y values, and vice versa. Each projection plays a different role in the computation of the waveform. The projection onto Y represents a spectral analysis, whereas the projection onto X is used to calculate a base frequency. For the spectral analysis I discretize the projection down to a reasonable limit (16 partials), and for the base frequency I compute the mean of the distribution of the projection.



Before computing the projections, I binarize the image and optionally apply gradient detection to it.
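The pipeline described above (binarize, project onto both axes, derive 16 partials and a base frequency) can be sketched as follows. This is a minimal illustration with NumPy, not the actual VT source code; the binarization threshold, frequency range, and function names are my own assumptions here.

```python
import numpy as np

def analyze_frame(img, n_partials=16, f_min=110.0, f_max=880.0):
    """Turn one grayscale frame into synthesis parameters.

    img: 2D array of pixel intensities (rows = y, columns = x).
    Returns (partial_amplitudes, base_frequency). The frequency
    range is an illustrative assumption, not taken from the VT.
    """
    # Binarize: a pixel is "on" if brighter than the frame's mean.
    binary = (img > img.mean()).astype(float)

    # Projections: p(x) sums over y (rows), p(y) sums over x (columns).
    proj_x = binary.sum(axis=0)
    proj_y = binary.sum(axis=1)

    # Spectral analysis: discretize the Y projection into 16 partials.
    bins = np.array_split(proj_y, n_partials)
    partials = np.array([b.sum() for b in bins])
    if partials.max() > 0:
        partials = partials / partials.max()  # normalize amplitudes

    # Base frequency: mean (centroid) of the X projection, mapped
    # linearly into an audible frequency range.
    xs = np.arange(len(proj_x))
    if proj_x.sum() > 0:
        centroid = (xs * proj_x).sum() / proj_x.sum()
    else:
        centroid = len(proj_x) / 2.0
    base_freq = f_min + (centroid / len(proj_x)) * (f_max - f_min)
    return partials, base_freq
```

A bright blob on the right side of the frame would thus raise the base frequency, while its vertical extent shapes which partials are active.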

Hasn't someone done something like this before?

I am aware that very similar things exist. There are very complex proprietary pieces of software that can do almost anything with live video. Among simple software like mine, this and this use a different approach: they extract the waveform's input from within the webcam output, whereas in the VT the whole webcam output is the input of the waveform. I think this is a big difference. It was probably simpler to implement (otherwise you first have to recognize some kind of object to get input), and it leads to infinite possibilities (any image can be the input of a waveform).

Finally, this thing implemented using the Wii is really cool.

Wednesday, January 27, 2010

Algorithmic music generation

Because I have studied music as well as computer engineering, I am interested in all topics that combine computer science and music. I have done a little research on algorithmic composition and created a tool that can generate music based on cyclic series. It is inspired by the theory of serialism, especially the theory behind dodecaphony.


The tool aims to answer the question: can beautiful music be created using only the rules of serialism?

In short, the tool generates music using a simple rule: repeat a series of notes in such a way that no note is more important than the rest. You define a series of notes and the program plays it according to some parameters. You can shuffle the series from time to time, transpose it, use retrograde or inverted series, and even assign basic series of durations and intensities.

The resulting MIDI file can be opened with any music editor and mixed with your favourite instruments.

When programming algorithmic composition, there is always the risk of ending up creating yet another random music generator (though there are some fun random music generators out there, and even serious projects with impressive mechanics involved). In my case I decided to create something simple and controllable, so that you always know what's going on and can change parameters based on something concrete, not just pure chance.