Byte Archives - Multithreaded Audio for Games
ANON: So I am using PortAudio to do audio for my game project, because I wanted to do manual audio mixing, stereo panning, etc. at least once before I use a library that does all this for me. PortAudio works by calling a callback in which I give it the audio samples for an indeterminate amount of time, and the callback probably gets called on another thread. The documentation tells me I should not do anything that can block in this thread, so no malloc(), no free(), and also no mutexes. How do I safely get the data that tells me what sounds to play into this thread? Atomic CPU instructions are the only way to communicate, right? Like atomic compare-and-swap, for example.

ANON: I can think of a way to do that with an atomic pointer swap, but avoiding the free() in the audio thread makes this way unreasonably complex, I think. Is there something I'm missing? Is there a simpler solution?

Martins: you can create two queues - one for passing buffers to the mixer thread, one for receiving finished buffers. Both queues will be single-producer-single-consumer, so pretty easy to implement. Then in the main/update thread you can get a buffer from the "finished" queue and either free it or recycle it for the next operation

ANON: I've been searching a bit online, but I can't really find any info on standard solutions to the audio mixer thread problem. I was sure there had to be a more fitting solution to this, since multithreaded queues are like passing events to threads, and events do not really fit the problem of controlling a number of audio sources. Having every single creation, deletion and update of an audio source be an event in a queue seems wrong.

ANON: I can eliminate the malloc and free by having a max count of sounds, with an active_count, and deleting sounds by just swapping them with the one at [active_count] and decrementing active_count.
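Martins' two-queue scheme can be sketched as a pair of single-producer single-consumer ring queues of buffer pointers. This is a minimal illustration under assumed names (`SpscQueue`, `spsc_push`, `spsc_pop`) and a fixed capacity, not PortAudio API code:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_CAP 16  /* power of two; one slot stays empty to tell full from empty */

/* Single-producer single-consumer queue of buffer pointers.
 * The producer only writes `head`, the consumer only writes `tail`,
 * so no locks are needed - just acquire/release ordering. */
typedef struct {
    void* slots[QUEUE_CAP];
    _Atomic size_t head;  /* next slot to write (producer side) */
    _Atomic size_t tail;  /* next slot to read (consumer side) */
} SpscQueue;

static bool spsc_push(SpscQueue* q, void* item) {
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t next = (head + 1) % QUEUE_CAP;
    if (next == atomic_load_explicit(&q->tail, memory_order_acquire))
        return false;  /* full */
    q->slots[head] = item;
    atomic_store_explicit(&q->head, next, memory_order_release);
    return true;
}

static bool spsc_pop(SpscQueue* q, void** item) {
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    if (tail == atomic_load_explicit(&q->head, memory_order_acquire))
        return false;  /* empty */
    *item = q->slots[tail];
    atomic_store_explicit(&q->tail, (tail + 1) % QUEUE_CAP, memory_order_release);
    return true;
}
```

With two such queues, the main thread pushes filled buffers on one and the audio callback pops from it; consumed buffers go back on the "finished" queue, so free() or reuse happens only on the main thread.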
Martins: I would suggest mixing on the main thread, and leaving this audio thread only as the place where you copy data from your mixed buffer into the API buffer/data

Martins: simplest solution would be to have just one circular buffer (always alive) and two pointers into it - reading and writing offsets. Then make sure reading never goes past the writing offset. If it does, it means the main thread cannot produce data fast enough, so you need to decide either to produce 0 output, or stop it, or something

ANON: I guess that would work to move the complexity out of the thread, but isn't the point of the audio thread that it handles the generation of the data, so that the main thread pausing does not cause audio glitches?

Martins: you can always pregenerate more data than expected

Martins: and on the next frame overwrite unused parts with new info

ANON: You would have to produce at least one frame time worth of audio data in the main thread.

ANON: But actually more, to deal with frametime spikes

Martins: right

Martins: this is the reason they say not to use mutexes in the audio thread callback

Martins: otherwise the same thing will happen

ANON: But if I generate more, then I cannot respond to changes on a frame-to-frame basis anymore, unless I recalculate the data each frame.

Martins: the main thread will take a lock

Martins: and pause for some reason

Martins: thus the audio glitch

Martins: this approach is no different from using mutexes

Martins: yes, you'll need to recalculate data

Martins: generate 3 frames of data (or something)

Martins: and in the next frame see where you are in time

Martins: and overwrite this 2nd or 3rd frame of data

Martins: more granular obviously, based on where the audio callback thread is currently reading from the buffer

Martins: It's an interesting solution.

Martins: there is no good way to avoid glitches in audio on desktop OSes, because they can pause your process/threads at any time for any period of time

Martins: But it avoids the multithreading problems while introducing others.
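The pregenerate-and-overwrite idea above can be sketched as follows: each game frame, the main thread rewinds its write cursor to just past the read cursor (plus a safety margin) and regenerates the not-yet-played samples with current mix parameters. All names here are assumptions, and picking MARGIN - it must cover however far the callback can advance while the main thread is rewriting - is the genuinely hard part of this scheme:

```c
#include <stdatomic.h>
#include <stddef.h>

#define PREGEN_CAP 16  /* samples; real code would size this in frames of audio */
#define MARGIN 4       /* samples never overwritten because the callback may be reading them */

typedef struct {
    float data[PREGEN_CAP];
    _Atomic size_t readOffset;   /* advanced only by the audio callback */
    _Atomic size_t writeOffset;  /* advanced only by the main thread */
} PregenRing;

/* Main thread, once per game frame: discard the not-yet-played samples beyond
 * the safety margin and regenerate them with up-to-date mix parameters. */
static void regenerate(PregenRing* r, const float* fresh, size_t n) {
    size_t rd = atomic_load_explicit(&r->readOffset, memory_order_acquire);
    size_t w = (rd + MARGIN) % PREGEN_CAP;  /* rewind: keep only the margin */
    for (size_t i = 0; i < n && i < PREGEN_CAP - MARGIN - 1; i++) {
        r->data[w] = fresh[i];
        w = (w + 1) % PREGEN_CAP;
    }
    atomic_store_explicit(&r->writeOffset, w, memory_order_release);
}
```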
Martins: I think there should be a way to have the audio thread handle the mixing though. Have it keep a set of audio sources and somehow update these from the main thread.

ANON: I can see now why one would want to use an audio library and not do the mixing themselves - not because the mixing is hard, but because the mixing needs to happen under heavy constraints.
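ANON's earlier fixed-capacity pool idea - a max count of sounds, deleting by swapping with the last active slot - is one way to keep such a set of sources alive without malloc/free. A minimal sketch, with hypothetical names:

```c
#include <stddef.h>

#define MAX_SOUNDS 64

typedef struct {
    int id;  /* plus playback state: cursor, volume, pan, ... */
} Sound;

typedef struct {
    Sound sounds[MAX_SOUNDS];  /* storage is fixed, so no malloc/free at runtime */
    size_t active_count;
} SoundPool;

/* Remove the sound at index i by swapping in the last active sound.
 * Order is not preserved, but no allocation or shifting is needed. */
static void pool_remove(SoundPool* p, size_t i) {
    if (i >= p->active_count) return;
    p->sounds[i] = p->sounds[p->active_count - 1];
    p->active_count--;
}
```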

Audio Thread Management in Game Development

The challenge of managing audio mixing in game development presents several key considerations, particularly when using PortAudio's callback-based system.

Core Requirements

Thread Safety Constraints

The PortAudio callback runs on a high-priority audio thread and must never block: no malloc() or free(), no mutexes, and no other potentially blocking calls. Communication with the rest of the program therefore has to use lock-free techniques such as atomic operations and single-producer-single-consumer queues.

Implementation Approaches

Single-Buffer Solution

A circular buffer implementation offers a straightforward approach:


struct AudioBuffer {
    float* data;                 /* allocated up front; never freed on the audio thread */
    _Atomic size_t readOffset;   /* advanced only by the audio callback */
    _Atomic size_t writeOffset;  /* advanced only by the main thread */
    size_t capacity;             /* in samples */
};
    

Main Thread Mixing

This approach moves mixing complexity to the main thread: the main thread mixes ahead of time into the shared circular buffer, and the audio callback only copies finished samples into the API's output buffer. To survive frametime spikes, the main thread must pregenerate more than one frame's worth of audio and overwrite the not-yet-played portion each frame as parameters change.
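A minimal sketch of this split, assuming a circular buffer with atomic read/write offsets; the function names (`ring_write`, `ring_read`) are illustrative, not PortAudio API:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define RING_CAP 8  /* samples; real code would hold several frames' worth */

typedef struct {
    float data[RING_CAP];
    _Atomic size_t readOffset;   /* advanced only by the audio callback */
    _Atomic size_t writeOffset;  /* advanced only by the main thread */
} Ring;

/* Main thread: append mixed samples, stopping if the ring is full. Returns count written. */
static size_t ring_write(Ring* r, const float* src, size_t n) {
    size_t w = atomic_load_explicit(&r->writeOffset, memory_order_relaxed);
    size_t rd = atomic_load_explicit(&r->readOffset, memory_order_acquire);
    size_t written = 0;
    while (written < n && (w + 1) % RING_CAP != rd) {
        r->data[w] = src[written++];
        w = (w + 1) % RING_CAP;
    }
    atomic_store_explicit(&r->writeOffset, w, memory_order_release);
    return written;
}

/* Audio callback: copy out up to n samples; fill the rest with silence on underrun. */
static size_t ring_read(Ring* r, float* dst, size_t n) {
    size_t rd = atomic_load_explicit(&r->readOffset, memory_order_relaxed);
    size_t w = atomic_load_explicit(&r->writeOffset, memory_order_acquire);
    size_t read = 0;
    while (read < n && rd != w) {
        dst[read++] = r->data[rd];
        rd = (rd + 1) % RING_CAP;
    }
    atomic_store_explicit(&r->readOffset, rd, memory_order_release);
    memset(dst + read, 0, (n - read) * sizeof(float));  /* underrun -> silence */
    return read;
}
```

Outputting silence on underrun is the "produce 0 output" choice from the discussion; an alternative is to stop the stream.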

Audio Thread Mixing

For handling mixing in the audio thread:


struct AudioSource {
    float* samples;         /* preallocated sample data */
    _Atomic bool active;    /* set and cleared by the main thread */
    _Atomic float volume;   /* updated by the main thread, read by the mixer */
    _Atomic float pan;      /* e.g. -1 = hard left, +1 = hard right */
};
    
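To make this concrete, here is a sketch of a mixer loop over such sources, reading the atomically updated parameters once per block. The extra fields (`length`, `cursor`), the interleaved stereo layout, and the linear pan law are assumptions for illustration, not anything from the discussion:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const float* samples;   /* mono source data, preallocated */
    size_t length;          /* total samples */
    size_t cursor;          /* playback position, owned by the audio thread */
    _Atomic bool active;    /* flipped by the main thread (and on playback end) */
    _Atomic float volume;   /* updated by the main thread */
    _Atomic float pan;      /* -1 = hard left, +1 = hard right */
} AudioSource;

/* Mix `count` mono sources into an interleaved stereo block of `frames` frames. */
static void mix_block(AudioSource* src, size_t count, float* out, size_t frames) {
    memset(out, 0, frames * 2 * sizeof(float));
    for (size_t s = 0; s < count; s++) {
        if (!atomic_load_explicit(&src[s].active, memory_order_acquire))
            continue;
        /* Read parameters once per block so they stay consistent within it. */
        float vol = atomic_load_explicit(&src[s].volume, memory_order_relaxed);
        float pan = atomic_load_explicit(&src[s].pan, memory_order_relaxed);
        float left  = vol * (1.0f - pan) * 0.5f;  /* simple linear pan law */
        float right = vol * (1.0f + pan) * 0.5f;
        for (size_t f = 0; f < frames && src[s].cursor < src[s].length; f++) {
            float v = src[s].samples[src[s].cursor++];
            out[2 * f]     += v * left;
            out[2 * f + 1] += v * right;
        }
        if (src[s].cursor >= src[s].length)
            atomic_store_explicit(&src[s].active, false, memory_order_release);
    }
}
```

Note that per-source state like `cursor` is touched only by the audio thread, while the main thread restricts itself to the atomic fields.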

Performance Considerations

Buffer Management

Buffers should be allocated up front and recycled rather than freed on the audio thread - for example via a "finished buffer" queue back to the main thread, or a fixed-capacity pool of sound slots with swap-removal.

Latency vs Stability

Approach              Latency   Stability   Complexity
Main Thread Mixing    Higher    Better      Lower
Audio Thread Mixing   Lower     Variable    Higher

Technical Limitations

Desktop operating systems can pause processes or threads at any time and for any period, making perfect audio continuity impossible to guarantee. The choice between main thread and audio thread mixing therefore becomes a trade-off between latency, stability, and implementation complexity.

The complexity of these requirements often justifies using established audio libraries that have already solved these threading and mixing challenges.