So I'm getting a grip on using csound, and it's going pretty well.
There are just two places where I'm hung up.
I can't apply effects to an incoming signal. As a simple example, I'd like to apply chorusing and a light reverb to my own singing, an external synth, a mic'ed guitar, etc., coming into csound, preferably in realtime so I can hear myself, and I'd also like to record it. I've had plenty of success putting global effects on traditional oscillator-based instruments within csound, and I can get my line-in to work with virtually no latency just by tinkering with the buffer settings in the CsOptions field, but I can't get the incoming signal modified.
I set a buffer and add -iadc and -odac. That works great, but only for the unprocessed input.
I've tried adding "-+realtimeaudio=alsa", but it doesn't appear to make a difference. (The messages when I run csound seem to indicate realtime ALSA is on anyway.)
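For what it's worth, here's roughly the kind of command line I mean (buffer values are just starting points to tune, and note that recent Csound 5 builds spell the module-selection flag "-+rtaudio=alsa", which might be why the flag above seemed to have no effect):

```
csound -iadc -odac -b128 -B512 -+rtaudio=alsa effects.csd
```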
I've tried the "in" opcode, as that appears to be the right way to get a mono signal into csound, and I'm using something like "a1" as the argument; I have a feeling this is part of the problem.
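One thing to check: "in" takes no input arguments at all; the signal variable goes on the left-hand side ("a1 in", not "in a1"), so if "a1" was written as an argument that would explain it. Here's a minimal sketch of the kind of .csd I have in mind, assuming a mono line-in; the chorus is just a dry signal mixed with an LFO-modulated short delay, and all the parameter values (delay times, reverb decay, mix levels) are guesses to taste:

```
<CsoundSynthesizer>
<CsOptions>
; same flags as above; buffer sizes are guesses to tune per machine
-iadc -odac -b128 -B512
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 1
0dbfs  = 1

instr 1
  ain   in                        ; mono line-in signal (output var on the left)
  klfo  oscili 2, 0.7, 1          ; LFO: +/- 2 ms at 0.7 Hz (f1 is a sine)
  adel  =      10 + klfo          ; delay time wobbles around 10 ms
  awet  vdelay ain, adel, 40      ; modulated delay, 40 ms maximum
  acho  =      (ain + awet) * 0.5 ; dry + modulated delay = simple chorus
  arvb  reverb acho, 1.5          ; light reverb, 1.5 s decay
        out    acho + arvb * 0.1
endin
</CsInstruments>
<CsScore>
f 1 0 8192 10 1
i 1 0 3600       ; keep the instrument running for an hour
</CsScore>
</CsoundSynthesizer>
```

The long score event matters: "in" only reads the input while an instrument instance is active, so without a running i-statement you'd hear nothing but whatever the soundcard's own hardware monitoring passes through unprocessed.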
This is on an olpc from the console, as root - not that I have any other choice.
I am at work at the moment, but I will post an example later if someone needs to see what I'm talking about in more detail.
My other question is re: outputting multiple audio files when rendering to disk. Previously I'd used audacity or ecasound and worked with multiple audio files (one per instrument, or maybe a submix for drums), applied effects and edited them appropriately, then mixed them all down to a single file. I'd love to do all my audio work in csound with a similar process. Is this possible, or am I better off just going back and forth between csound and ecasound? Ecasound is great for simple recording, but I really prefer csound over bash scripting for any kind of processing or mixing.
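If I understand what's wanted, the "fout" opcode should do it: each instrument can write its own audio file while still feeding the normal -o mixdown, so one render produces per-instrument stems plus the mix. A sketch, where the instrument names, filenames, and format code are just examples (check fout's iformat table in the manual for your version):

```
<CsoundSynthesizer>
<CsOptions>
-o mixdown.wav       ; the normal mixdown still gets rendered
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 1
0dbfs  = 1

instr 1                           ; "bass" stem
  asig oscili 0.4, p4, 1
       fout   "bass.wav", 2, asig ; 2 = 16-bit ints with a header
       out    asig                ; also goes to the mixdown
endin

instr 2                           ; "lead" stem
  asig oscili 0.3, p4 * 2, 1
       fout   "lead.wav", 2, asig
       out    asig
endin
</CsInstruments>
<CsScore>
f 1 0 8192 10 1
i 1 0 4 110
i 2 0 4 220
</CsScore>
</CsoundSynthesizer>
```

That would let the whole multitrack-style workflow stay inside csound, with ecasound only needed if you want to edit the stems interactively afterwards.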