Ecasound documentation - examples
I've used the text-mode version (ecasound) in all these
examples, but qtecasound - the Qt interface to ecasound - could also
have been used. It understands the same command-line parameters as the
text-mode version.
Format conversions
- ecasound -i:somefile.wav -o:somefile.cdr
- ecasound -i somefile.wav -o somefile.cdr
These do the same thing: convert somefile.wav to
somefile.cdr. As no chains are specified, the default
chain is used.
- ecasound -a:1,2 -i somefile.wav -a:1 -o somefile.cdr -a:2 -o somefile.mp3
This is not a very useful example in itself, but it hopefully helps you
understand the way chains work. First, two new chains, 1 and 2,
are created (you can also use strings: -a:some_name_with_no_whitespaces,some_other_name).
They are now the active chains. After this, input somefile.wav is
connected to both of them. The rest follows the same scheme:
chain 1 is set active and output somefile.cdr is
attached to it. In the same way, somefile.mp3 is attached to
chain 2.
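For example, the same setup could use descriptive chain names instead of
numbers (the names here are arbitrary, as long as they contain no whitespace):
- ecasound -a:to_cdr,to_mp3 -i somefile.wav -a:to_cdr -o somefile.cdr -a:to_mp3 -o somefile.mp3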
- ecasound -c -i somefile.wav -o somefile.cdr
- qtecasound -i somefile.wav -o somefile.cdr
Like before, but ecasound is now started in interactive mode.
Whether -c is given or not, qtecasound always starts in interactive mode.
Realtime outputs (soundcard playback)
- ecasound somefile.wav
- ecasound -i somefile.wav
- ecasound -i:somefile.wav
- ecasound -i somefile.wav -o /dev/dsp
If you haven't touched your ~/.ecasoundrc configuration file,
these should all do the same thing: output somefile.wav to
/dev/dsp using the default chain. If no inputs are
specified, ecasound tries to use the first non-option argument on the
command line as a default input. If no chains are specified, the chain
default is created and set active. If no outputs are specified,
the default-output defined in ~/.ecasoundrc is used. This is
normally /dev/dsp.
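For reference, this default output is set with the default-output
keyword mentioned above. Assuming the resource file's simple
'keyword = value' syntax, the line in ~/.ecasoundrc would look roughly
like this (a sketch; check your own ecasoundrc for the exact form):
default-output = /dev/dsp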
- ecasound -i somefile.mp3 -o alsahw,0,0
- ecasound -i somefile.mp3 -o alsaplugin,0,0
- ecasound -i somefile.mp3 -o alsa,soundcard_name
The ALSA drivers use a somewhat different option syntax. You
first specify either "alsahw" (to indicate you want to use the
ALSA direct hw interface) or "alsaplugin" (for the ALSA plugin layer),
and then give the card number and the device number (optionally a
subdevice number can also be given). The plugin layer is able to handle
some type conversions automatically. The third option is specific
to ALSA 0.9.x (and newer). 'soundcard_name' must be defined in the
ALSA configuration files (either ~/.asoundrc or the global settings
file). Otherwise, ALSA inputs/outputs work just like OSS devices.
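For instance, assuming the optional subdevice number is simply appended
after the card and device numbers, selecting subdevice 0 of the first
card's first device might look like this (a sketch; check your ecasound
version's documentation for the exact parameter order):
- ecasound -i somefile.mp3 -o alsahw,0,0,0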
- mpg123 -s sometune.mp3 | ecasound -i:stdin -o alsahw,0,0
Send the output of mpg123 to standard output (-s option) and
read it from standard input with ecasound (-i:stdin option). If you
want to use native ALSA support with OSS programs, this is
one easy way to do it. This can also be used to add effects
to standard streams containing audio data.
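As a sketch of that last idea, here the piped signal is amplified before
playback (-ea is covered in the effect-processing section below):
- mpg123 -s sometune.mp3 | ecasound -i:stdin -o alsahw,0,0 -ea:120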
Realtime inputs (recording from soundcard)
- ecasound -i:/dev/dsp0 -o somefile.wav
- ecasound -i:/dev/dsp0 -o somefile.wav -c
- ecasound -i alsahw,1,0 -o somefile.wav
These are simple examples of recording. Notice that when recording
it's often useful to run ecasound in interactive mode (-c).
Effect processing
Ecasound is an extremely versatile tool when it comes to effect
processing. After all, it was originally written for non-realtime
DSP processing. Because of this, these examples just scratch
the surface.
- ecasound -i somefile.mp3 -o /dev/dsp -ea:120
- ecasound -a:default -i somefile.mp3 -o /dev/dsp -ea:120
Let's start with a simple one. These do the same thing: an mp3 input,
OSS output and an amplify effect, which amplifies the signal
to 120% of its original level, are added to the default chain.
- ecasound -i somefile.mp3 -o /dev/dsp -etr:40,0,55 -ea:120
Like the previous example, but now a reverb effect, with a delay of 40
milliseconds, surround disabled and a mix percentage of 55, is added to the
chain before the amplify effect. In other words, the signal is first
processed with the reverb and then amplified. This way you can add
as many effects as you like. If you run out of CPU power, you can
always write the output to a file instead.
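For example, the same processing written to a file rather than played
back (processed.wav is just an illustrative name):
- ecasound -i somefile.mp3 -o processed.wav -etr:40,0,55 -ea:120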
- ecasound -a:1,2 -i somefile.mp3 -a:all -o /dev/dsp \
-a:1 -etr:40,0,55 -ea:120 \
-a:2 -efl:400
OK, let's do some parallel processing. This time two chains are
created and the input file is assigned to both of them. The output
device is assigned to all chains using the special keyword all; -a:1,2
would also work here. This way we can use one signal in multiple chains
and process each chain with different effects. You can create as many
chains as you want.
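As a variation, each chain could also be written to its own file, which
makes it easy to compare the two processed versions afterwards (the
filenames are illustrative):
- ecasound -a:1,2 -i somefile.mp3 \
-a:1 -etr:40,0,55 -o reverb-version.wav \
-a:2 -efl:400 -o lowpass-version.wav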
Using controller sources with effects
- ecasound -i somefile.wav -o /dev/dsp -ef3:800,1.5,0.9 -kos:1,400,4200,0.2,0 -kos:2,0.1,1.5,0.15,0
- ecasound -i somefile.wav -o /dev/dsp -ef3:800,1.5,0.9 -km:1,400,4200,74,0 -km:2,0.1,1.5,71,0
The first example uses two sine oscillators
(-kos:parameter,range_low,range_high,speed_in_Hz,initial_phase)
to control a resonant lowpass filter. The cutoff frequency varies
between 400 and 4200 Hz, while the resonance varies between 0.1 and 1.5.
The initial phase is 0 (times pi). The second example uses MIDI continuous
controllers
(-km:parameter,range_low,range_high,controller_number,midi-channel)
as controller sources. The ranges are the same as in the
first example. The controller numbers used are 74 (cutoff) and 71
(resonance). In other words you can use your synth's cutoff and
resonance knobs.
It's also possible to control controllers with other controllers
using the -kx option. Normally when you add a controller,
you're controlling the last specified chain operator. -kx
changes this. Let's take an example:
- ecasound -i file.wav -o /dev/dsp -ea:100 -kos:1,0,100,0.5,0 -kx -kos:4,0.1,5,0.5,0
Same as before, but now another 0.5Hz sine oscillator is controlling
the frequency of the first oscillator.
- ecasound -i file.wav -o /dev/dsp -ef3:1000,1.0,1.0 -kos:1,500,2000,1,0 \
-kos:2,0.2,1.0,0.5,0 \
-kx -km:1,0.1,1.5,2,0
OK, let's get really wacky. Here a 1 Hz sine oscillator is assigned to
the cutoff frequency, while the other oscillator controls the resonance.
On top of that, a MIDI controller is added to control the second sine
oscillator.
Multitrack recording
- ecasound -c -b:256 -r -f:16,2,44100 \
-a:1 -i monitor-track.wav -o /dev/dsp \
-a:2 -i /dev/dsp -o new-track.wav
It really is this simple. To minimize synchronization problems,
a small buffer size is set with -b:buffer_size_in_samples.
This time I set it to 256 samples. To ensure flawless recording,
runtime priority is raised with -r. Then a default sample format
is set with -f:bits,channels,sample_rate. Now all that's left
is to specify two chains: one for monitoring and one for recording.
When using the above command, you need some way of monitoring
the signal that's being recorded. A common way is to enable
hardware monitoring (unmute/adjust the line-in level from your mixer app).
If you want to use ecasound itself for monitoring, you have to add a
separate chain for it:
- ecasound -c -b:256 \
-a:1 -i monitor-track.wav \
-a:2,3 -i /dev/dsp \
-a:2 -o new-track.wav \
-a:1,3 -o /dev/dsp
One thing to note is that there are differences in how OSS soundcard
drivers handle full-duplex (simultaneous playback and recording)
operation. Some drivers allow the same device to be opened multiple
times (as in the above example, where '/dev/dsp' is opened once for
recording and once for playback).
You can always do test recordings until you find the optimal volume
levels (using the soundcard mixer apps and adjusting the source volume),
but ecasound offers a better way to do this. It's a bit ugly,
but most importantly, it works in text mode:
- ecasound -c -f:16,2,44100 -a:1 -i /dev/dsp0 -o /dev/dsp3 -ev
Basically this just records from one OSS input, puts the signal through
an analyzer (-ev) effect and outputs to an OSS output. The secret
here is that you can get volume statistics with the estatus (or
es) command in interactive mode. Qtecasound also offers
an estatus pushbutton. This way you can adjust the mixer
settings, check the statistics (after which they're reset), adjust
again, check the statistics, and so on. Newer ecasound versions (1.8.5
and up) come with 'ecasignalview', a standalone app that
can monitor signal levels in realtime.
Mixing
Here are a few real-life mixdown examples.
- ecasound -c \
-a:1 -i drums.wav \
-a:2 -i synth-background.wav \
-a:3 -i bass-guitar_take-2.ewf \
-a:4 -i brass-house-lead.wav \
-a:all -o /dev/dsp
First of all, interactive mode is selected with -c. Then
four inputs (all stereo) are added. All four chains are then assigned
to one output, which this time is the soundcard (/dev/dsp). That's
all.
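Note that bass-guitar_take-2.ewf above is not an audio file itself,
but ecasound's small text-format wrapper (.ewf) that points to an
audio file and describes how to place it in the mix. As a rough
sketch of its contents (the key names here are recalled from the
ecasound user's guide and should be checked against your version's
documentation):
source = bass-guitar_take-2.wav
offset = 8.0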
- ecasound -c -r -b:2048 \
-a:1,5 -i drums.wav -ea:200 \
-a:2,6 -i synth-background.wav -epp:40 -ea:120 \
-a:3,7 -i bass-guitar_take-2.ewf -ea:75 \
-a:4,8 -i brass-house-lead.wav -epp:60 -ea:50 \
-a:1,2,3,4 -o /dev/dsp \
-a:5,6,7,8 -o current_mix.wav
This second example is more complex. The same inputs are used, but
this time effects (amplify, -ea:amplify_percent, and normal
pan, -epp:left_right_balance) are also used. The first four chains are
assigned to the soundcard output as in the first example, but now we
also have another set of chains that are assigned to a WAVE file,
current_mix.wav. In this example, runtime priority is also
raised with -r, and a bigger buffer size is used.
Cut, copy and paste
- ecasound -i bigfile.wav -o part1.wav -t:60.0
- ecasound -i bigfile.wav -y:60.0 -o part2.wav
Here's a simple example where the first 60 seconds of
bigfile.wav are written to part1.wav and the rest to
part2.wav. If you want to combine these files back into
one big file:
- ecasound -i part2.wav -o part1.wav -y:60.0
part2.wav is appended to part1.wav: -y:60.0 sets the output position
of part1.wav to 60 seconds, so writing starts right where the first
part ends.
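The same two options can be combined to cut a section out of the middle
of a file; for example, 30 seconds starting at the one-minute mark
(middle-part.wav is just an illustrative name):
- ecasound -i bigfile.wav -y:60.0 -t:30.0 -o middle-part.wav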
Multichannel processing
You need to worry about channel routing only if the input and
output channel counts don't match. Channels are routed with the
channel-copy operator, -erc:from_channel,to_channel. Here's how you
divide a 4-channel audio file into four mono files.
- ecasound -a:1,2,3,4 -i 4-channel-file.raw \
-a:1 -f:16,1,44100 -o mono-1.wav \
-a:2 -f:16,1,44100 -o mono-2.wav -erc:2,1 \
-a:3 -f:16,1,44100 -o mono-3.wav -erc:3,1 \
-a:4 -f:16,1,44100 -o mono-4.wav -erc:4,1
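If only one of the channels is needed, a single chain following the
same pattern is enough; for example, extracting just the third channel:
- ecasound -i 4-channel-file.raw -f:16,1,44100 -o mono-3.wav -erc:3,1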
Signal routing through external devices
- ecasound -c -b:128 -r -f:16,2,44100 \
-a:1 -i source-track.wav -o /dev/dsp3 \
-a:2 -i /dev/dsp0 -o target-track.wav
So basically, this is just like multitrack recording. The only difference
is that the realtime input and output are connected to an external device.
Presets and LADSPA effect plugins
- ecasound -i null -o /dev/dsp -el:sine_fcac,440,1
This produces a 440 Hz sine tone (great for tuning your instruments!).
For the above to work, the LADSPA SDK needs to be installed (see
www.ladspa.org).
- ecasound -i:null -o:/dev/dsp -el:sine_fcac,880,1 -eemb:120,10 -efl:2000
This results in an audible metronome signal with a tempo of 120 BPM. Now
the syntax might look a bit unwieldy for everyday use. Luckily
ecasound's preset system helps in this situation. You can get
exactly the same result with:
- ecasound -i:null -o:/dev/dsp -pn:metronome,120
See the file 'effect_presets' for a list of available effect
presets. By default, this file is located at '/usr/local/share/ecasound/effect_presets'.
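Presets behave like any other chain operator, so they can be freely
combined with the options shown earlier; for example, amplifying the
metronome signal:
- ecasound -i:null -o:/dev/dsp -pn:metronome,120 -ea:200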