SDL3pp
A slim C++ wrapper for SDL3
Audio functionality for the SDL library. More...

Classes | |
| struct | SDL::AudioDeviceParam |
| Safely wrap AudioDevice for non owning parameters. More... | |
| struct | SDL::AudioStreamParam |
| Safely wrap AudioStream for non owning parameters. More... | |
| class | SDL::AudioFormat |
| Audio format. More... | |
| class | SDL::AudioDevice |
| SDL Audio Device instance IDs. More... | |
| struct | SDL::AudioDeviceRef |
| Semi-safe reference for AudioDevice. More... | |
| class | SDL::AudioStream |
| The opaque handle that represents an audio stream. More... | |
| struct | SDL::AudioStreamRef |
| Semi-safe reference for AudioStream. More... | |
Typedefs | |
| using | SDL::AudioFormatRaw = SDL_AudioFormat |
| Alias to raw representation for AudioFormat. | |
| using | SDL::AudioDeviceID = SDL_AudioDeviceID |
| Alias to raw representation for AudioDevice. | |
| using | SDL::AudioStreamRaw = SDL_AudioStream * |
| Alias to raw representation for AudioStream. | |
| using | SDL::AudioSpec = SDL_AudioSpec |
| Format specifier for audio data. More... | |
| using | SDL::AudioPostmixCallback = SDL_AudioPostmixCallback |
| A callback that fires when data is about to be fed to an audio device. More... | |
| using | SDL::AudioPostmixCB = std::function< void(const AudioSpec &spec, std::span< float > buffer)> |
| A callback that fires when data is about to be fed to an audio device. More... | |
| using | SDL::AudioStreamCallback = SDL_AudioStreamCallback |
| A callback that fires when data passes through an AudioStream. More... | |
| using | SDL::AudioStreamCB = std::function< void(AudioStreamRef stream, int additional_amount, int total_amount)> |
| A callback that fires when data passes through an AudioStream. More... | |
Functions | |
| constexpr AudioFormat | SDL::DefineAudioFormat (bool sign, bool bigendian, bool flt, Uint16 size) |
| Define an AudioFormat value. More... | |
| constexpr Uint16 | SDL::AudioBitSize (AudioFormatRaw x) |
| Retrieve the size, in bits, from an AudioFormat. More... | |
| constexpr Uint16 | SDL::AudioByteSize (AudioFormatRaw x) |
| Retrieve the size, in bytes, from an AudioFormat. More... | |
| constexpr bool | SDL::IsAudioFloat (AudioFormatRaw x) |
| Determine if an AudioFormat represents floating point data. More... | |
| constexpr bool | SDL::IsAudioBigEndian (AudioFormatRaw x) |
| Determine if an AudioFormat represents bigendian data. More... | |
| constexpr bool | SDL::IsAudioLittleEndian (AudioFormatRaw x) |
| Determine if an AudioFormat represents littleendian data. More... | |
| constexpr bool | SDL::IsAudioSigned (AudioFormatRaw x) |
| Determine if an AudioFormat represents signed data. More... | |
| constexpr bool | SDL::IsAudioInt (AudioFormatRaw x) |
| Determine if an AudioFormat represents integer data. More... | |
| constexpr bool | SDL::IsAudioUnsigned (AudioFormatRaw x) |
| Determine if an AudioFormat represents unsigned data. More... | |
| constexpr int | SDL::AudioFrameSize (const AudioSpec &x) |
| Calculate the size of each audio frame (in bytes) from an AudioSpec. More... | |
| int | SDL::GetNumAudioDrivers () |
| Use this function to get the number of built-in audio drivers. More... | |
| const char * | SDL::GetAudioDriver (int index) |
| Use this function to get the name of a built-in audio driver. More... | |
| const char * | SDL::GetCurrentAudioDriver () |
| Get the name of the current audio driver. More... | |
| OwnArray< AudioDeviceRef > | SDL::GetAudioPlaybackDevices () |
| Get a list of currently-connected audio playback devices. More... | |
| OwnArray< AudioDeviceRef > | SDL::GetAudioRecordingDevices () |
| Get a list of currently-connected audio recording devices. More... | |
| const char * | SDL::GetAudioDeviceName (AudioDeviceParam devid) |
| Get the human-readable name of a specific audio device. More... | |
| AudioSpec | SDL::GetAudioDeviceFormat (AudioDeviceParam devid, int *sample_frames=nullptr) |
| Get the current audio format of a specific audio device. More... | |
| OwnArray< int > | SDL::GetAudioDeviceChannelMap (AudioDeviceParam devid) |
| Get the current channel map of an audio device. More... | |
| AudioDevice | SDL::OpenAudioDevice (AudioDeviceParam devid, OptionalRef< const AudioSpec > spec) |
| Open a specific audio device. More... | |
| bool | SDL::IsAudioDevicePhysical (AudioDeviceParam devid) |
| Determine if an audio device is physical (instead of logical). More... | |
| bool | SDL::IsAudioDevicePlayback (AudioDeviceParam devid) |
| Determine if an audio device is a playback device (instead of recording). More... | |
| void | SDL::PauseAudioDevice (AudioDeviceParam devid) |
| Use this function to pause audio playback on a specified device. More... | |
| void | SDL::ResumeAudioDevice (AudioDeviceParam devid) |
| Use this function to unpause audio playback on a specified device. More... | |
| bool | SDL::AudioDevicePaused (AudioDeviceParam devid) |
| Use this function to query if an audio device is paused. More... | |
| float | SDL::GetAudioDeviceGain (AudioDeviceParam devid) |
| Get the gain of an audio device. More... | |
| void | SDL::SetAudioDeviceGain (AudioDeviceParam devid, float gain) |
| Change the gain of an audio device. More... | |
| void | SDL::CloseAudioDevice (AudioDeviceID devid) |
| Close a previously-opened audio device. More... | |
| void | SDL::BindAudioStreams (AudioDeviceParam devid, std::span< AudioStreamRef > streams) |
| Bind a list of audio streams to an audio device. More... | |
| void | SDL::BindAudioStream (AudioDeviceParam devid, AudioStreamParam stream) |
| Bind a single audio stream to an audio device. More... | |
| void | SDL::UnbindAudioStreams (std::span< AudioStreamRef > streams) |
| Unbind a list of audio streams from their audio devices. More... | |
| void | SDL::UnbindAudioStream (AudioStreamParam stream) |
| Unbind a single audio stream from its audio device. More... | |
| AudioDeviceRef | SDL::GetAudioStreamDevice (AudioStreamParam stream) |
| Query an audio stream for its currently-bound device. More... | |
| AudioStream | SDL::CreateAudioStream (OptionalRef< const AudioSpec > src_spec, OptionalRef< const AudioSpec > dst_spec) |
| Create a new audio stream. More... | |
| PropertiesRef | SDL::GetAudioStreamProperties (AudioStreamParam stream) |
| Get the properties associated with an audio stream. More... | |
| void | SDL::GetAudioStreamFormat (AudioStreamParam stream, AudioSpec *src_spec, AudioSpec *dst_spec) |
| Query the current format of an audio stream. More... | |
| void | SDL::SetAudioStreamFormat (AudioStreamParam stream, OptionalRef< const AudioSpec > src_spec, OptionalRef< const AudioSpec > dst_spec) |
| Change the input and output formats of an audio stream. More... | |
| float | SDL::GetAudioStreamFrequencyRatio (AudioStreamParam stream) |
| Get the frequency ratio of an audio stream. More... | |
| void | SDL::SetAudioStreamFrequencyRatio (AudioStreamParam stream, float ratio) |
| Change the frequency ratio of an audio stream. More... | |
| float | SDL::GetAudioStreamGain (AudioStreamParam stream) |
| Get the gain of an audio stream. More... | |
| void | SDL::SetAudioStreamGain (AudioStreamParam stream, float gain) |
| Change the gain of an audio stream. More... | |
| OwnArray< int > | SDL::GetAudioStreamInputChannelMap (AudioStreamParam stream) |
| Get the current input channel map of an audio stream. More... | |
| OwnArray< int > | SDL::GetAudioStreamOutputChannelMap (AudioStreamParam stream) |
| Get the current output channel map of an audio stream. More... | |
| void | SDL::SetAudioStreamInputChannelMap (AudioStreamParam stream, std::span< int > chmap) |
| Set the current input channel map of an audio stream. More... | |
| void | SDL::SetAudioStreamOutputChannelMap (AudioStreamParam stream, std::span< int > chmap) |
| Set the current output channel map of an audio stream. More... | |
| void | SDL::PutAudioStreamData (AudioStreamParam stream, SourceBytes buf) |
| Add data to the stream. More... | |
| int | SDL::GetAudioStreamData (AudioStreamParam stream, TargetBytes buf) |
| Get converted/resampled data from the stream. More... | |
| int | SDL::GetAudioStreamAvailable (AudioStreamParam stream) |
| Get the number of converted/resampled bytes available. More... | |
| int | SDL::GetAudioStreamQueued (AudioStreamParam stream) |
| Get the number of bytes currently queued. More... | |
| void | SDL::FlushAudioStream (AudioStreamParam stream) |
| Tell the stream that you're done sending data, and anything being buffered should be converted/resampled and made available immediately. More... | |
| void | SDL::ClearAudioStream (AudioStreamParam stream) |
| Clear any pending data in the stream. More... | |
| void | SDL::PauseAudioStreamDevice (AudioStreamParam stream) |
| Use this function to pause audio playback on the audio device associated with an audio stream. More... | |
| void | SDL::ResumeAudioStreamDevice (AudioStreamParam stream) |
| Use this function to unpause audio playback on the audio device associated with an audio stream. More... | |
| bool | SDL::AudioStreamDevicePaused (AudioStreamParam stream) |
| Use this function to query if an audio device associated with a stream is paused. More... | |
| void | SDL::LockAudioStream (AudioStreamParam stream) |
| Lock an audio stream for serialized access. More... | |
| void | SDL::UnlockAudioStream (AudioStreamParam stream) |
| Unlock an audio stream for serialized access. More... | |
| void | SDL::SetAudioStreamGetCallback (AudioStreamParam stream, AudioStreamCallback callback, void *userdata) |
| Set a callback that runs when data is requested from an audio stream. More... | |
| void | SDL::SetAudioStreamGetCallback (AudioStreamParam stream, AudioStreamCB callback) |
| Set a callback that runs when data is requested from an audio stream. More... | |
| void | SDL::SetAudioStreamPutCallback (AudioStreamParam stream, AudioStreamCallback callback, void *userdata) |
| Set a callback that runs when data is added to an audio stream. More... | |
| void | SDL::SetAudioStreamPutCallback (AudioStreamParam stream, AudioStreamCB callback) |
| Set a callback that runs when data is added to an audio stream. More... | |
| void | SDL::DestroyAudioStream (AudioStreamRaw stream) |
| Free an audio stream. More... | |
| AudioStream | SDL::OpenAudioDeviceStream (AudioDeviceParam devid, OptionalRef< const AudioSpec > spec, AudioStreamCallback callback=nullptr, void *userdata=nullptr) |
| Convenience function for straightforward audio init for the common case. More... | |
| AudioStream | SDL::OpenAudioDeviceStream (AudioDeviceParam devid, OptionalRef< const AudioSpec > spec, AudioStreamCB callback) |
| Convenience function for straightforward audio init for the common case. More... | |
| void | SDL::SetAudioPostmixCallback (AudioDeviceParam devid, AudioPostmixCallback callback, void *userdata) |
| Set a callback that fires when data is about to be fed to an audio device. More... | |
| void | SDL::SetAudioPostmixCallback (AudioDeviceParam devid, AudioPostmixCB callback) |
| Set a callback that fires when data is about to be fed to an audio device. More... | |
| OwnArray< Uint8 > | SDL::LoadWAV (IOStreamParam src, AudioSpec *spec, bool closeio=false) |
| Load the audio data of a WAVE file into memory. More... | |
| OwnArray< Uint8 > | SDL::LoadWAV (StringParam path, AudioSpec *spec) |
| Loads a WAV from a file path. More... | |
| void | SDL::MixAudio (Uint8 *dst, SourceBytes src, AudioFormat format, float volume) |
| Mix audio data in a specified format. More... | |
| void | SDL::MixAudio (TargetBytes dst, SourceBytes src, AudioFormat format, float volume) |
| Mix audio data in a specified format. More... | |
| OwnArray< Uint8 > | SDL::ConvertAudioSamples (const AudioSpec &src_spec, SourceBytes src_data, const AudioSpec &dst_spec) |
| Convert some audio data of one format to another format. More... | |
| const char * | SDL::GetAudioFormatName (AudioFormatRaw format) |
| Get the human readable name of an audio format. More... | |
| int | SDL::GetSilenceValueForFormat (AudioFormatRaw format) |
| Get the appropriate memset value for silencing an audio format. More... | |
| constexpr Uint16 | SDL::AudioFormat::GetBitSize () const |
| Retrieve the size, in bits, from an AudioFormat. More... | |
| constexpr Uint16 | SDL::AudioFormat::GetByteSize () const |
| Retrieve the size, in bytes, from an AudioFormat. More... | |
| constexpr bool | SDL::AudioFormat::IsFloat () const |
| Determine if an AudioFormat represents floating point data. More... | |
| constexpr bool | SDL::AudioFormat::IsBigEndian () const |
| Determine if an AudioFormat represents bigendian data. More... | |
| constexpr bool | SDL::AudioFormat::IsLittleEndian () const |
| Determine if an AudioFormat represents littleendian data. More... | |
| constexpr bool | SDL::AudioFormat::IsSigned () const |
| Determine if an AudioFormat represents signed data. More... | |
| constexpr bool | SDL::AudioFormat::IsInt () const |
| Determine if an AudioFormat represents integer data. More... | |
| constexpr bool | SDL::AudioFormat::IsUnsigned () const |
| Determine if an AudioFormat represents unsigned data. More... | |
| const char * | SDL::AudioDevice::GetName () const |
| Get the human-readable name of a specific audio device. More... | |
| AudioSpec | SDL::AudioDevice::GetFormat (int *sample_frames=nullptr) const |
| Get the current audio format of a specific audio device. More... | |
| OwnArray< int > | SDL::AudioDevice::GetChannelMap () const |
| Get the current channel map of an audio device. More... | |
| bool | SDL::AudioDevice::IsPhysical () const |
| Determine if an audio device is physical (instead of logical). More... | |
| bool | SDL::AudioDevice::IsPlayback () const |
| Determine if an audio device is a playback device (instead of recording). More... | |
| void | SDL::AudioDevice::Pause () |
| Use this function to pause audio playback on a specified device. More... | |
| void | SDL::AudioDevice::Resume () |
| Use this function to unpause audio playback on a specified device. More... | |
| bool | SDL::AudioDevice::Paused () const |
| Use this function to query if an audio device is paused. More... | |
| float | SDL::AudioDevice::GetGain () const |
| Get the gain of an audio device. More... | |
| void | SDL::AudioDevice::SetGain (float gain) |
| Change the gain of an audio device. More... | |
| void | SDL::AudioDevice::Close () |
| Close a previously-opened audio device. More... | |
| void | SDL::AudioDevice::BindAudioStreams (std::span< AudioStreamRef > streams) |
| Bind a list of audio streams to an audio device. More... | |
| void | SDL::AudioDevice::BindAudioStream (AudioStreamParam stream) |
| Bind a single audio stream to an audio device. More... | |
| void | SDL::AudioStream::Unbind () |
| Unbind a single audio stream from its audio device. More... | |
| AudioDeviceRef | SDL::AudioStream::GetDevice () const |
| Query an audio stream for its currently-bound device. More... | |
| PropertiesRef | SDL::AudioStream::GetProperties () const |
| Get the properties associated with an audio stream. More... | |
| void | SDL::AudioStream::GetFormat (AudioSpec *src_spec, AudioSpec *dst_spec) const |
| Query the current format of an audio stream. More... | |
| void | SDL::AudioStream::SetFormat (OptionalRef< const AudioSpec > src_spec, OptionalRef< const AudioSpec > dst_spec) |
| Change the input and output formats of an audio stream. More... | |
| float | SDL::AudioStream::GetFrequencyRatio () const |
| Get the frequency ratio of an audio stream. More... | |
| void | SDL::AudioStream::SetFrequencyRatio (float ratio) |
| Change the frequency ratio of an audio stream. More... | |
| float | SDL::AudioStream::GetGain () const |
| Get the gain of an audio stream. More... | |
| void | SDL::AudioStream::SetGain (float gain) |
| Change the gain of an audio stream. More... | |
| OwnArray< int > | SDL::AudioStream::GetInputChannelMap () const |
| Get the current input channel map of an audio stream. More... | |
| OwnArray< int > | SDL::AudioStream::GetOutputChannelMap () const |
| Get the current output channel map of an audio stream. More... | |
| void | SDL::AudioStream::SetInputChannelMap (std::span< int > chmap) |
| Set the current input channel map of an audio stream. More... | |
| void | SDL::AudioStream::SetOutputChannelMap (std::span< int > chmap) |
| Set the current output channel map of an audio stream. More... | |
| void | SDL::AudioStream::PutData (SourceBytes buf) |
| Add data to the stream. More... | |
| int | SDL::AudioStream::GetData (TargetBytes buf) |
| Get converted/resampled data from the stream. More... | |
| int | SDL::AudioStream::GetAvailable () const |
| Get the number of converted/resampled bytes available. More... | |
| int | SDL::AudioStream::GetQueued () const |
| Get the number of bytes currently queued. More... | |
| void | SDL::AudioStream::Flush () |
| Tell the stream that you're done sending data, and anything being buffered should be converted/resampled and made available immediately. More... | |
| void | SDL::AudioStream::Clear () |
| Clear any pending data in the stream. More... | |
| void | SDL::AudioStream::PauseDevice () |
| Use this function to pause audio playback on the audio device associated with an audio stream. More... | |
| void | SDL::AudioStream::ResumeDevice () |
| Use this function to unpause audio playback on the audio device associated with an audio stream. More... | |
| bool | SDL::AudioStream::DevicePaused () const |
| Use this function to query if an audio device associated with a stream is paused. More... | |
| void | SDL::AudioStream::Lock () |
| Lock an audio stream for serialized access. More... | |
| void | SDL::AudioStream::Unlock () |
| Unlock an audio stream for serialized access. More... | |
| void | SDL::AudioStream::SetGetCallback (AudioStreamCallback callback, void *userdata) |
| Set a callback that runs when data is requested from an audio stream. More... | |
| void | SDL::AudioStream::SetGetCallback (AudioStreamCB callback) |
| Set a callback that runs when data is requested from an audio stream. More... | |
| void | SDL::AudioStream::SetPutCallback (AudioStreamCallback callback, void *userdata) |
| Set a callback that runs when data is added to an audio stream. More... | |
| void | SDL::AudioStream::SetPutCallback (AudioStreamCB callback) |
| Set a callback that runs when data is added to an audio stream. More... | |
| void | SDL::AudioStream::Destroy () |
| Free an audio stream. More... | |
| AudioStream | SDL::AudioDevice::OpenStream (OptionalRef< const AudioSpec > spec, AudioStreamCallback callback, void *userdata) |
| Convenience function for straightforward audio init for the common case. More... | |
| AudioStream | SDL::AudioDevice::OpenStream (OptionalRef< const AudioSpec > spec, AudioStreamCB callback) |
| Convenience function for straightforward audio init for the common case. More... | |
| SDL::AudioStream::AudioStream (AudioDeviceParam devid, OptionalRef< const AudioSpec > spec, AudioStreamCB callback) | |
| Convenience function for straightforward audio init for the common case. More... | |
| void | SDL::AudioDevice::SetPostmixCallback (AudioPostmixCallback callback, void *userdata) |
| Set a callback that fires when data is about to be fed to an audio device. More... | |
| void | SDL::AudioDevice::SetPostmixCallback (AudioPostmixCB callback) |
| Set a callback that fires when data is about to be fed to an audio device. More... | |
| const char * | SDL::AudioFormat::GetName () const |
| Get the human readable name of an audio format. More... | |
| int | SDL::AudioFormat::GetSilenceValue () const |
| Get the appropriate memset value for silencing an audio format. More... | |
Variables | |
| constexpr Uint32 | SDL::AUDIO_MASK_BITSIZE = SDL_AUDIO_MASK_BITSIZE |
| Mask of bits in an AudioFormat that contains the format bit size. More... | |
| constexpr Uint32 | SDL::AUDIO_MASK_FLOAT = SDL_AUDIO_MASK_FLOAT |
| Mask of bits in an AudioFormat that contain the floating point flag. More... | |
| constexpr Uint32 | SDL::AUDIO_MASK_BIG_ENDIAN = SDL_AUDIO_MASK_BIG_ENDIAN |
| Mask of bits in an AudioFormat that contain the bigendian flag. More... | |
| constexpr Uint32 | SDL::AUDIO_MASK_SIGNED = SDL_AUDIO_MASK_SIGNED |
| Mask of bits in an AudioFormat that contain the signed data flag. More... | |
| constexpr AudioFormat | SDL::AUDIO_UNKNOWN |
| Unspecified audio format. More... | |
| constexpr AudioFormat | SDL::AUDIO_U8 = SDL_AUDIO_U8 |
| Unsigned 8-bit samples. | |
| constexpr AudioFormat | SDL::AUDIO_S8 = SDL_AUDIO_S8 |
| Signed 8-bit samples. | |
| constexpr AudioFormat | SDL::AUDIO_S16LE = SDL_AUDIO_S16LE |
| Signed 16-bit samples. | |
| constexpr AudioFormat | SDL::AUDIO_S16BE |
| As above, but big-endian byte order. More... | |
| constexpr AudioFormat | SDL::AUDIO_S32LE = SDL_AUDIO_S32LE |
| 32-bit integer samples | |
| constexpr AudioFormat | SDL::AUDIO_S32BE |
| As above, but big-endian byte order. More... | |
| constexpr AudioFormat | SDL::AUDIO_F32LE |
| 32-bit floating point samples More... | |
| constexpr AudioFormat | SDL::AUDIO_F32BE |
| As above, but big-endian byte order. More... | |
| constexpr AudioFormat | SDL::AUDIO_S16 = SDL_AUDIO_S16 |
| Signed 16-bit samples in native byte order. |
| constexpr AudioFormat | SDL::AUDIO_S32 = SDL_AUDIO_S32 |
| Signed 32-bit integer samples in native byte order. |
| constexpr AudioFormat | SDL::AUDIO_F32 = SDL_AUDIO_F32 |
| 32-bit floating point samples in native byte order. |
| constexpr AudioDeviceID | SDL::AUDIO_DEVICE_DEFAULT_PLAYBACK |
| A value used to request a default playback audio device. More... | |
| constexpr AudioDeviceID | SDL::AUDIO_DEVICE_DEFAULT_RECORDING |
| A value used to request a default recording audio device. More... | |
All audio in SDL3 revolves around AudioStream. Whether you want to play or record audio, convert it, stream it, buffer it, or mix it, you're going to be passing it through an audio stream.
Audio streams are quite flexible; they can accept any amount of data at a time, in any supported format, and output it as needed in any other format, even if the data format changes on either side halfway through.
An app opens an audio device and binds any number of audio streams to it, feeding more data to the streams as available. When the device needs more data, it will pull it from all bound streams and mix them together for playback.
Audio streams can also use an app-provided callback to supply data on-demand, which maps pretty closely to the SDL2 audio model.
SDL also provides a simple .WAV loader in LoadWAV (with overloads that read either from a file path or from an IOStream) as a basic means to load sound data into your program.
In SDL3, opening a physical device (like a SoundBlaster 16 Pro) gives you a logical device ID that you can bind audio streams to. In almost all cases, logical devices can be used anywhere in the API that a physical device is normally used. However, since each device opening generates a new logical device, different parts of the program (say, a VoIP library, or text-to-speech framework, or maybe some other sort of mixer on top of SDL) can have their own device opens that do not interfere with each other; each logical device will mix its separate audio down to a single buffer, fed to the physical device, behind the scenes. As many logical devices as you like can come and go; SDL will only have to open the physical device at the OS level once, and will manage all the logical devices on top of it internally.
One other benefit of logical devices: if you don't open a specific physical device, instead opting for the default, SDL can automatically migrate those logical devices to different hardware as circumstances change: a user plugged in headphones? The system default changed? SDL can transparently migrate the logical devices to the correct physical device seamlessly and keep playing; the app doesn't even have to know it happened if it doesn't want to.
As a simplified model for when a single source of audio is all that's needed, an app can use AudioStream.AudioStream, which is a single function to open an audio device, create an audio stream, bind that stream to the newly-opened device, and (optionally) provide a callback for obtaining audio data. When using this function, the primary interface is the AudioStream and the device handle is mostly hidden away; destroying a stream created through this function will also close the device, stream bindings cannot be changed, etc. One other quirk of this is that the device is started in a paused state and must be explicitly resumed; this is partially to offer a clean migration for SDL2 apps and partially because the app might have to do more setup before playback begins; in the non-simplified form, nothing will play until a stream is bound to a device, so they start unpaused.
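A rough sketch of that simplified model follows; the AudioSpec values and the lambda body are illustrative assumptions, not part of the API:

SDL::AudioSpec spec;
spec.format = SDL_AUDIO_F32;   // raw SDL format value; 32-bit float samples
spec.channels = 2;
spec.freq = 48000;

// Open the default playback device, create a stream, and bind it, all in one call.
SDL::AudioStream stream(SDL::AUDIO_DEVICE_DEFAULT_PLAYBACK, spec,
  [](SDL::AudioStreamRef s, int additional_amount, int total_amount) {
    // Feed roughly additional_amount bytes here, e.g. with s.PutData(...).
  });

// In this simplified model the device starts paused; start playback explicitly.
stream.ResumeDevice();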
Audio data passing through SDL is uncompressed PCM data, interleaved. An app can plug in its own decoder (MP3, Ogg, etc.), but SDL does not provide one directly. Each interleaved channel of data is meant to be in a specific order.
Abbreviations:
- FRONT = single mono speaker
- FL = front left speaker
- FR = front right speaker
- FC = front center speaker
- BL = back left speaker
- BR = back right speaker
- SL = side left speaker
- SR = side right speaker
- BC = back center speaker
- LFE = low-frequency speaker
These are listed in the order they are laid out in memory, so "FL, FR" means "the front left speaker is laid out in memory first, then the front right, then it repeats for the next audio frame".
- 1 channel (mono) layout: FRONT
- 2 channels (stereo) layout: FL, FR
- 3 channels (2.1) layout: FL, FR, LFE
- 4 channels (quad) layout: FL, FR, BL, BR
- 5 channels (4.1) layout: FL, FR, LFE, BL, BR
- 6 channels (5.1) layout: FL, FR, FC, LFE, BL, BR (last two can also be SL, SR)
- 7 channels (6.1) layout: FL, FR, FC, LFE, BC, SL, SR
- 8 channels (7.1) layout: FL, FR, FC, LFE, BL, BR, SL, SR
This is the same order as DirectSound expects, but applied to all platforms; SDL will swizzle the channels as necessary if a platform expects something different.
AudioStream can also be provided channel maps to change this ordering to whatever is necessary, in other audio processing scenarios.
| using SDL::AudioPostmixCallback = SDL_AudioPostmixCallback |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
This callback should run as quickly as possible and not block for any significant time, as this callback delays submission of data to the audio device, which can cause audio playback problems.
The postmix callback must be able to handle any audio data format specified in spec, which can change between callbacks if the audio device changed. However, this only covers frequency and channel count; data is always provided here in AUDIO_F32 format.
The postmix callback runs after logical device gain and audiostream gain have been applied, which is to say you can make the output data louder at this point than the gain settings would suggest.
| userdata | a pointer provided by the app through AudioDevice.SetPostmixCallback, for its own use. |
| spec | the current format of audio that is to be submitted to the audio device. |
| buffer | the buffer of audio samples to be submitted. The callback can inspect and/or modify this data. |
| buflen | the size of buffer in bytes. |
| using SDL::AudioPostmixCB = std::function<void(const AudioSpec& spec, std::span<float> buffer)> |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
This callback should run as quickly as possible and not block for any significant time, as this callback delays submission of data to the audio device, which can cause audio playback problems.
The postmix callback must be able to handle any audio data format specified in spec, which can change between callbacks if the audio device changed. However, this only covers frequency and channel count; data is always provided here in AUDIO_F32 format.
The postmix callback runs after logical device gain and audiostream gain have been applied, which is to say you can make the output data louder at this point than the gain settings would suggest.
| spec | the current format of audio that is to be submitted to the audio device. |
| buffer | the buffer of audio samples to be submitted. The callback can inspect and/or modify this data. |
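A minimal postmix sketch; it assumes an already-opened logical device named device (hypothetical) and simply halves the final mix before it reaches the hardware:

device.SetPostmixCallback([](const SDL::AudioSpec &spec, std::span<float> buffer) {
  // Data here is always AUDIO_F32; spec reports the current freq and channel count.
  for (float &sample : buffer)
    sample *= 0.5f;  // attenuate the final mix by half
});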
| using SDL::AudioSpec = SDL_AudioSpec |
| using SDL::AudioStreamCallback = SDL_AudioStreamCallback |
Apps can (optionally) register a callback with an audio stream that is called when data is added with AudioStream.PutData, or requested with AudioStream.GetData.
Two values are offered here: one is the amount of additional data needed to satisfy the immediate request (which might be zero if the stream already has enough data queued) and the other is the total amount being requested. In a Get call triggering a Put callback, these values can be different. In a Put call triggering a Get callback, these values are always the same.
Byte counts might be slightly overestimated due to buffering or resampling, and may change from call to call.
This callback is not required to do anything. Generally this is useful for adding/reading data on demand, and the app will often put/get data as appropriate, but the system goes on with the data currently available to it if this callback does nothing.
| stream | the SDL audio stream associated with this callback. |
| additional_amount | the amount of data, in bytes, that is needed right now. |
| total_amount | the total amount of data requested, in bytes, that is requested or available. |
| userdata | an opaque pointer provided by the app for their personal use. |
| using SDL::AudioStreamCB = std::function< void(AudioStreamRef stream, int additional_amount, int total_amount)> |
Apps can (optionally) register a callback with an audio stream that is called when data is added with AudioStream.PutData, or requested with AudioStream.GetData.
Two values are offered here: one is the amount of additional data needed to satisfy the immediate request (which might be zero if the stream already has enough data queued) and the other is the total amount being requested. In a Get call triggering a Put callback, these values can be different. In a Put call triggering a Get callback, these values are always the same.
Byte counts might be slightly overestimated due to buffering or resampling, and may change from call to call.
This callback is not required to do anything. Generally this is useful for adding/reading data on demand, and the app will often put/get data as appropriate, but the system goes on with the data currently available to it if this callback does nothing.
| stream | the SDL audio stream associated with this callback. |
| additional_amount | the amount of data, in bytes, that is needed right now. |
| total_amount | the total amount of data requested, in bytes, that is requested or available. |
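A sketch of a get-callback that keeps a stream topped up with silence; stream is a hypothetical already-created AudioStream, and it is assumed that PutData accepts a byte container via SourceBytes:

stream.SetGetCallback([](SDL::AudioStreamRef s, int additional_amount, int total_amount) {
  if (additional_amount <= 0)
    return;  // the stream already has enough data queued
  // Queue additional_amount bytes of silence (0 is the silence value for signed/float formats).
  std::vector<Uint8> silence(additional_amount, 0);
  s.PutData(silence);  // assumes SourceBytes can be built from a std::vector
});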
|
constexpr |
For example, AudioFormat.GetBitSize(AUDIO_S16) returns 16.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.GetByteSize(AUDIO_S16) returns 2.
| x | an AudioFormat value. |
|
inline |
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow.
Physical devices cannot be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be. Physical and invalid device IDs will report themselves as unpaused here.
| devid | a device opened by AudioDevice.AudioDevice(). |
|
constexpr |
This reports on the size of an audio sample frame: stereo Sint16 data (2 channels of 2 bytes each) would be 4 bytes per frame, for example.
| x | an AudioSpec to query. |
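For example (a small sketch; the spec values are arbitrary):

SDL::AudioSpec spec;
spec.format = SDL_AUDIO_S16;  // 2 bytes per sample
spec.channels = 2;            // stereo
spec.freq = 48000;
int frame_bytes = SDL::AudioFrameSize(spec);  // 2 channels * 2 bytes = 4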
|
inline |
If all your app intends to do is provide a single source of PCM audio, this function allows you to do all your audio setup in a single call.
This is also intended to be a clean means to migrate apps from SDL2.
This function will open an audio device, create a stream and bind it. Unlike other methods of setup, the audio device will be closed when this stream is destroyed, so the app can treat the returned AudioStream as the only object needed to manage audio playback.
Also unlike other functions, the audio device begins paused. This is to map more closely to SDL2-style behavior, since there is no extra step here to bind a stream to begin audio flowing. The audio device should be resumed with AudioStream.ResumeDevice().
This function works with both playback and recording devices.
The spec parameter represents the app's side of the audio stream. That is, for recording audio, this will be the output format, and for playing audio, this will be the input format. If spec is nullptr, the system will choose the format, and the app can use AudioStream.GetFormat() to obtain this information later.
If you don't care about opening a specific audio device, you can (and probably should) use AUDIO_DEVICE_DEFAULT_PLAYBACK for playback and AUDIO_DEVICE_DEFAULT_RECORDING for recording.
One can optionally provide a callback function; if nullptr, the app is expected to queue audio data for playback (or unqueue audio data if capturing). Otherwise, the callback will begin to fire once the device is unpaused.
Destroying the returned stream with AudioStream.Destroy will also close the audio device associated with this stream.
| devid | an audio device to open, or AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING. |
| spec | the audio stream's data format. Can be nullptr. |
| callback | a callback where the app will provide new data for playback, or receive new data for recording. |
| Error | on failure. |
|
inline |
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow.
| stream | the audio stream associated with the audio device to query. |
|
inline |
This is a convenience function, equivalent to calling AudioDevice.BindAudioStreams(devid, &stream, 1).
| devid | an audio device to bind a stream to. |
| stream | an audio stream to bind to a device. |
| Error | on failure. |
|
inline |
This is a convenience function, equivalent to calling AudioDevice.BindAudioStreams(devid, &stream, 1).
| stream | an audio stream to bind to a device. |
| Error | on failure. |
|
inline |
Audio data will flow through any bound streams. For a playback device, data for all bound streams will be mixed together and fed to the device. For a recording device, a copy of recorded data will be provided to each bound stream.
Audio streams can only be bound to an open device. This operation is atomic–all streams bound in the same call will start processing at the same time, so they can stay in sync. Also: either all streams will be bound or none of them will be.
It is an error to bind an already-bound stream; it must be explicitly unbound first.
Binding a stream to a device will set its output format for playback devices, and its input format for recording devices, so they match the device's settings. The caller is welcome to change the other end of the stream's format at any time with AudioStream.SetFormat(). If the other end of the stream's format has never been set (the audio stream was created with a nullptr audio spec), this function will set it to match the device end's format.
| devid | an audio device to bind a stream to. |
| streams | an array of audio streams to bind. |
| Error | on failure. |
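A sketch of binding two streams in one atomic call; devid is a hypothetical opened logical device, music and sfx are hypothetical already-created streams, and it is assumed an AudioStreamRef can be taken from an AudioStream:

std::array<SDL::AudioStreamRef, 2> streams{music, sfx};
SDL::BindAudioStreams(devid, streams);  // both streams start processing together, or neither does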
|
inline |
Audio data will flow through any bound streams. For a playback device, data for all bound streams will be mixed together and fed to the device. For a recording device, a copy of recorded data will be provided to each bound stream.
Audio streams can only be bound to an open device. This operation is atomic–all streams bound in the same call will start processing at the same time, so they can stay in sync. Also: either all streams will be bound or none of them will be.
It is an error to bind an already-bound stream; it must be explicitly unbound first.
Binding a stream to a device will set its output format for playback devices, and its input format for recording devices, so they match the device's settings. The caller is welcome to change the other end of the stream's format at any time with AudioStream.SetFormat(). If the other end of the stream's format has never been set (the audio stream was created with a nullptr audio spec), this function will set it to match the device end's format.
| streams | an array of audio streams to bind. |
| Error | on failure. |
|
inline |
|
inline |
|
inline |
The application should close open audio devices once they are no longer needed.
This function may block briefly while pending audio data is played by the hardware, so that applications don't drop the last buffer of data they supplied if terminating immediately afterwards.
|
inline |
The application should close open audio devices once they are no longer needed.
This function may block briefly while pending audio data is played by the hardware, so that applications don't drop the last buffer of data they supplied if terminating immediately afterwards.
| devid | an audio device id previously returned by AudioDevice.AudioDevice(). |
|
inline |
Please note that this function is for convenience, but should not be used to resample audio in blocks, as it will introduce audio artifacts on the boundaries. You should only use this function if you are converting audio data in its entirety in one call. If you want to convert audio in smaller chunks, use an AudioStream, which is designed for this situation.
Internally, this function creates and destroys an AudioStream on each use, so it's also less efficient than using one directly, if you need to convert multiple times.
| src_spec | the format details of the input audio. |
| src_data | the audio data to be converted. |
| dst_spec | the format details of the output audio. |
| Error | on failure. |
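A one-shot conversion sketch; mono_s16 is a hypothetical buffer holding the complete input, and it is assumed SourceBytes can be built from a contiguous byte range:

SDL::AudioSpec src_spec;
src_spec.format = SDL_AUDIO_S16; src_spec.channels = 1; src_spec.freq = 22050;

SDL::AudioSpec dst_spec;
dst_spec.format = SDL_AUDIO_F32; dst_spec.channels = 2; dst_spec.freq = 48000;

// Convert the whole buffer in one call (do not use this for block-by-block resampling).
SDL::OwnArray<Uint8> converted = SDL::ConvertAudioSamples(src_spec, mono_s16, dst_spec);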
|
inline |
|
constexpr |
SDL does not support custom audio formats, so this function is not of much use externally, but it can be illustrative as to what the various bits of an AudioFormat mean.
For example, AUDIO_S32LE looks like this:
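(Sketched here with the wrapper's DefineAudioFormat: signed, little-endian, integer, 32 bits per sample.)

SDL::DefineAudioFormat(/*sign=*/true, /*bigendian=*/false, /*flt=*/false, /*size=*/32)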
| sign | 1 for signed data, 0 for unsigned data. |
| bigendian | 1 for bigendian data, 0 for littleendian data. |
| flt | 1 for floating point data, 0 for integer data. |
| size | number of bits per sample. |
|
inline |
This will release all allocated data, including any audio that is still queued. You do not need to manually clear the stream first.
If this stream was bound to an audio device, it is unbound during this call. If this stream was created with AudioStream.AudioStream, the audio device that was opened alongside this stream's creation will be closed, too.
|
inline |
This will release all allocated data, including any audio that is still queued. You do not need to manually clear the stream first.
If this stream was bound to an audio device, it is unbound during this call. If this stream was created with AudioStream.AudioStream, the audio device that was opened alongside this stream's creation will be closed, too.
| stream | the audio stream to destroy. |
|
inline |
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow.
|
inline |
It is legal to add more data to a stream after flushing, but there may be audio gaps in the output. Generally this is intended to signal the end of input, so the complete output becomes available.
| Error | on failure. |
|
inline |
It is legal to add more data to a stream after flushing, but there may be audio gaps in the output. Generally this is intended to signal the end of input, so the complete output becomes available.
| stream | the audio stream to flush. |
| Error | on failure. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio devices usually have no remapping applied. This is represented by returning nullptr, and does not signify an error.
| devid | the instance ID of the device to query. |
|
inline |
For an opened device, this will report the format the device is currently using. If the device isn't yet opened, this will report the device's preferred format (or a reasonable default if this can't be determined).
You may also specify AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING here, which is useful for getting a reasonable recommendation before opening the system-recommended default device.
You can also use this to request the current device buffer size. This is specified in sample frames and represents the amount of data SDL will feed to the physical hardware in each chunk. This can be converted to milliseconds of audio with the following equation:
ms = (int) ((((Sint64) frames) * 1000) / spec.freq);
Buffer size is only important if you need low-level control over the audio playback timing. Most apps do not need this.
| devid | the instance ID of the device to query. |
| sample_frames | pointer to store device buffer size, in sample frames. Can be nullptr. |
| Error | on failure. |
|
inline |
The gain of a device is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio devices default to a gain of 1.0f (no change in output).
Physical devices may not have their gain changed, only logical devices, and this function will always return -1.0f when used on physical devices.
| devid | the audio device to query. |
|
inline |
|
inline |
The list of audio drivers is given in the order that they are normally initialized by default; the drivers that seem more reasonable to choose first (as far as the SDL developers believe) are earlier in the list.
The names of drivers are all simple, low-ASCII identifiers, like "alsa", "coreaudio" or "wasapi". These never have Unicode characters, and are not meant to be proper names.
| index | the index of the audio driver; the value ranges from 0 to GetNumAudioDrivers() - 1. |
|
inline |
| format | the audio format to query. |
|
inline |
This returns a list of available devices that play sound, perhaps to speakers or headphones ("playback" devices). If you want devices that record audio, like a microphone ("recording" devices), use GetAudioRecordingDevices() instead.
This only returns a list of physical devices; it will not have any device IDs returned by AudioDevice.AudioDevice().
The returned array may be empty if no playback devices are available; that does not signify an error.
| Error | on failure. |
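A short enumeration sketch; it assumes AudioDeviceRef exposes the AudioDevice accessors such as GetName():

SDL::OwnArray<SDL::AudioDeviceRef> devices = SDL::GetAudioPlaybackDevices();
for (const SDL::AudioDeviceRef &dev : devices)
  SDL_Log("playback device: %s", dev.GetName());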
|
inline |
This returns a list of available devices that record audio, like a microphone ("recording" devices). If you want devices that play sound, perhaps to speakers or headphones ("playback" devices), use GetAudioPlaybackDevices() instead.
This only returns a list of physical devices; it will not have any device IDs returned by AudioDevice.AudioDevice().
The returned array may be empty if no recording devices are available; that does not signify an error.
| Error | on failure. |
|
inline |
The stream may be buffering data behind the scenes until it has enough to resample correctly, so this number might be lower than what you expect, or even be zero. Add more data or flush the stream if you need the data now.
If the stream has so much data that it would overflow an int, the return value is clamped to a maximum value, but no queued data is lost; if there are gigabytes of data queued, the app might need to read some of it with AudioStream.GetData before this function's return value is no longer clamped.
| stream | the audio stream to query. |
|
inline |
The input/output data format/channels/samplerate is specified when creating the stream, and can be changed after creation by calling AudioStream.SetFormat.
Note that any conversion and resampling necessary is done during this call, and AudioStream.PutData simply queues unconverted data for later. This is different than SDL2, where that work was done while inputting new data to the stream and requesting the output just copied the converted data.
| stream | the stream the audio is being requested from. |
| buf | a buffer to fill with audio data. |
|
inline |
This reports the logical audio device that an audio stream is currently bound to.
If not bound, or invalid, this returns zero, which is not a valid device ID.
| stream | the audio stream to query. |
|
inline |
| stream | the AudioStream to query. |
| src_spec | where to store the input audio format; ignored if nullptr. |
| dst_spec | where to store the output audio format; ignored if nullptr. |
| Error | on failure. |
|
inline |
| stream | the AudioStream to query. |
|
inline |
The gain of a stream is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio streams default to a gain of 1.0f (no change in output).
| stream | the AudioStream to query. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio streams default to no remapping applied. This is represented by returning nullptr, and does not signify an error.
| stream | the AudioStream to query. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio streams default to no remapping applied. This is represented by returning nullptr, and does not signify an error.
| stream | the AudioStream to query. |
|
inline |
| stream | the AudioStream to query. |
| Error | on failure. |
|
inline |
This is the number of bytes put into a stream as input, not the number that can be retrieved as output. Because of several details, it's not possible to calculate one number directly from the other. If you need to know how much usable data can be retrieved right now, you should use AudioStream.GetAvailable() and not this function.
Note that audio streams can change their input format at any time, even if there is still data queued in a different format, so the returned byte count will not necessarily match the number of sample frames available. Users of this API should be aware of format changes they make when feeding a stream and plan accordingly.
Queued data is not converted until it is consumed by AudioStream.GetData, so this value should be representative of the exact data that was put into the stream.
If the stream has so much data that it would overflow an int, the return value is clamped to a maximum value, but no queued data is lost; if there are gigabytes of data queued, the app might need to read some of it with AudioStream.GetData before this function's return value is no longer clamped.
| stream | the audio stream to query. |
|
inline |
The stream may be buffering data behind the scenes until it has enough to resample correctly, so this number might be lower than what you expect, or even be zero. Add more data or flush the stream if you need the data now.
If the stream has so much data that it would overflow an int, the return value is clamped to a maximum value, but no queued data is lost; if there are gigabytes of data queued, the app might need to read some of it with AudioStream.GetData before this function's return value is no longer clamped.
|
constexpr |
For example, AudioFormat.GetBitSize(AUDIO_S16) returns 16.
|
constexpr |
For example, AudioFormat.GetByteSize(AUDIO_S16) returns 2.
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio devices usually have no remapping applied. This is represented by returning nullptr, and does not signify an error.
|
inline |
The names of drivers are all simple, low-ASCII identifiers, like "alsa", "coreaudio" or "wasapi". These never have Unicode characters, and are not meant to be proper names.
|
inline |
The input/output data format/channels/samplerate is specified when creating the stream, and can be changed after creation by calling AudioStream.SetFormat.
Note that any conversion and resampling necessary is done during this call, and AudioStream.PutData simply queues unconverted data for later. This is different than SDL2, where that work was done while inputting new data to the stream and requesting the output just copied the converted data.
| buf | a buffer to fill with audio data. |
|
inline |
This reports the logical audio device that an audio stream is currently bound to.
If not bound, or invalid, this returns zero, which is not a valid device ID.
| src_spec | where to store the input audio format; ignored if nullptr. |
| dst_spec | where to store the output audio format; ignored if nullptr. |
| Error | on failure. |
|
inline |
For an opened device, this will report the format the device is currently using. If the device isn't yet opened, this will report the device's preferred format (or a reasonable default if this can't be determined).
You may also specify AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING here, which is useful for getting a reasonable recommendation before opening the system-recommended default device.
You can also use this to request the current device buffer size. This is specified in sample frames and represents the amount of data SDL will feed to the physical hardware in each chunk. This can be converted to milliseconds of audio with the following equation:
ms = (int) ((((Sint64) frames) * 1000) / spec.freq);
Buffer size is only important if you need low-level control over the audio playback timing. Most apps do not need this.
| sample_frames | pointer to store device buffer size, in sample frames. Can be nullptr. |
| Error | on failure. |
|
inline |
|
inline |
The gain of a device is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio devices default to a gain of 1.0f (no change in output).
Physical devices may not have their gain changed, only logical devices, and this function will always return -1.0f when used on physical devices.
|
inline |
The gain of a stream is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio streams default to a gain of 1.0f (no change in output).
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio streams default to no remapping applied. This is represented by returning nullptr, and does not signify an error.
|
inline |
|
inline |
|
inline |
This function returns a hardcoded number. This never returns a negative value; if there are no drivers compiled into this build of SDL, this function returns zero. The presence of a driver in this list does not mean it will function, it just means SDL is capable of interacting with that interface. For example, a build of SDL might have esound support, but if there's no esound server available, SDL's esound driver would fail if used.
By default, SDL tries all drivers, in its preferred order, until one is found to be usable.
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
Audio streams default to no remapping applied. This is represented by returning nullptr, and does not signify an error.
|
inline |
|
inline |
This is the number of bytes put into a stream as input, not the number that can be retrieved as output. Because of several details, it's not possible to calculate one number directly from the other. If you need to know how much usable data can be retrieved right now, you should use AudioStream.GetAvailable() and not this function.
Note that audio streams can change their input format at any time, even if there is still data queued in a different format, so the returned byte count will not necessarily match the number of sample frames available. Users of this API should be aware of format changes they make when feeding a stream and plan accordingly.
Queued data is not converted until it is consumed by AudioStream.GetData, so this value should be representative of the exact data that was put into the stream.
If the stream has so much data that it would overflow an int, the return value is clamped to a maximum value, but no queued data is lost; if there are gigabytes of data queued, the app might need to read some of it with AudioStream.GetData before this function's return value is no longer clamped.
|
inline |
The value returned by this function can be used as the second argument to memset (or SDL_memset) to set an audio buffer in a specific format to silence.
|
inline |
The value returned by this function can be used as the second argument to memset (or SDL_memset) to set an audio buffer in a specific format to silence.
| format | the audio data format to query. |
|
constexpr |
For example, AudioFormat.IsBigEndian(AUDIO_S16LE) returns 0.
| x | an AudioFormat value. |
|
inline |
An AudioDevice that represents physical hardware is a physical device; there is one for each piece of hardware that SDL can see. Logical devices are created by calling AudioDevice.AudioDevice or AudioStream.AudioStream, and while each is associated with a physical device, there can be any number of logical devices on one physical device.
For the most part, logical and physical IDs are interchangeable–if you try to open a logical device, SDL understands to assign that effort to the underlying physical device, etc. However, it might be useful to know if an arbitrary device ID is physical or logical. This function reports which.
This function may return either true or false for invalid device IDs.
| devid | the device ID to query. |
|
inline |
This function may return either true or false for invalid device IDs.
| devid | the device ID to query. |
|
constexpr |
For example, AudioFormat.IsFloat(AUDIO_S16) returns 0.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.IsInt(AUDIO_F32) returns 0.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.IsLittleEndian(AUDIO_S16BE) returns 0.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.IsSigned(AUDIO_U8) returns 0.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.IsUnsigned(AUDIO_S16) returns 0.
| x | an AudioFormat value. |
|
constexpr |
For example, AudioFormat.IsBigEndian(AUDIO_S16LE) returns 0.
|
constexpr |
For example, AudioFormat.IsFloat(AUDIO_S16) returns 0.
|
constexpr |
For example, AudioFormat.IsInt(AUDIO_F32) returns 0.
|
constexpr |
For example, AudioFormat.IsLittleEndian(AUDIO_S16BE) returns 0.
|
inline |
An AudioDevice that represents physical hardware is a physical device; there is one for each piece of hardware that SDL can see. Logical devices are created by calling AudioDevice.AudioDevice or AudioStream.AudioStream, and while each is associated with a physical device, there can be any number of logical devices on one physical device.
For the most part, logical and physical IDs are interchangeable–if you try to open a logical device, SDL understands to assign that effort to the underlying physical device, etc. However, it might be useful to know if an arbitrary device ID is physical or logical. This function reports which.
This function may return either true or false for invalid device IDs.
|
inline |
This function may return either true or false for invalid device IDs.
|
constexpr |
For example, AudioFormat.IsSigned(AUDIO_U8) returns 0.
|
constexpr |
For example, AudioFormat.IsUnsigned(AUDIO_S16) returns 0.
|
inline |
Loading a WAVE file requires src and spec to be valid. The entire data portion of the file is then loaded into memory and decoded if necessary.
Supported formats are RIFF WAVE files with the formats PCM (8, 16, 24, and 32 bits), IEEE Float (32 bits), Microsoft ADPCM and IMA ADPCM (4 bits), and A-law and mu-law (8 bits). Other formats are currently unsupported and cause an error.
If this function succeeds, the decoded audio data is returned, and the AudioSpec members freq, channels, and format are set to the values of the audio data in the buffer.
The returned OwnArray<Uint8> owns this data, so it does not need to be freed manually.
Because of the underspecification of the .WAV format, there are many problematic files in the wild that cause issues with strict decoders. To provide compatibility with these files, this decoder is lenient in regards to the truncation of the file, the fact chunk, and the size of the RIFF chunk. The hints SDL_HINT_WAVE_RIFF_CHUNK_SIZE, SDL_HINT_WAVE_TRUNCATION, and SDL_HINT_WAVE_FACT_CHUNK can be used to tune the behavior of the loading process.
Any file that is invalid (due to truncation, corruption, or wrong values in the headers), too big, or unsupported causes an error. Additionally, any critical I/O error from the data source will terminate the loading process with an error. The function returns nullptr on error and in all cases (with the exception of src being nullptr), an appropriate error message will be set.
It is required that the data source supports seeking.
Example:
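A sketch, assuming src is an IOStream you have already opened on the WAVE data:

SDL::AudioSpec spec;
SDL::OwnArray<Uint8> wav = SDL::LoadWAV(src, &spec, /*closeio=*/true);
// spec.freq, spec.channels, and spec.format now describe the decoded data in wav.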
Note that the LoadWAV function does this same thing for you, but in a less messy way:
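For instance (the file name is illustrative):

SDL::AudioSpec spec;
SDL::OwnArray<Uint8> wav = SDL::LoadWAV("sample.wav", &spec);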
| src | the data source for the WAVE data. |
| spec | a pointer to an AudioSpec that will be set to the WAVE data's format details on successful return. |
| closeio | if true, calls IOStream.Close() on src before returning, even in the case of an error. |
| Error | on failure. |
This function throws if the .WAV file cannot be opened, uses an unknown data format, or is corrupt; call GetError() for more information.
|
inline |
This is a convenience function that is effectively the same as:
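In terms of the underlying C API, this is roughly:

```cpp
// Open the file, load it, and close the stream even on failure (closeio == true).
SDL_LoadWAV_IO(SDL_IOFromFile(path, "rb"), true, spec, audio_buf, audio_len);
```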
| path | the file path of the WAV file to open. |
| spec | a pointer to an AudioSpec that will be set to the WAVE data's format details on successful return. |
| Error | on failure. |
This function throws if the .WAV file cannot be opened, uses an unknown data format, or is corrupt; call GetError() for more information.
|
inline |
Each AudioStream has an internal mutex it uses to protect its data structures from threading conflicts. This function allows an app to lock that mutex, which could be useful when registering callbacks on this stream.
One does not need to lock a stream to use it in most cases, as the stream manages this lock internally. However, this lock is held during callbacks, which may run from arbitrary threads at any time, so if an app needs to protect shared data during those callbacks, locking the stream guarantees that the callback is not running while the lock is held.
This is just a wrapper over Mutex.Lock for an internal lock, so it has all the same attributes (recursive locks are allowed, etc.).
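A small sketch of the intended use, assuming the lock maps onto SDL_LockAudioStream/SDL_UnlockAudioStream and that `stream` and `shared_state` are app-owned placeholders:

```cpp
// While the lock is held, no stream callback can be running,
// so it is safe to touch data the callbacks also read.
SDL_LockAudioStream(stream);
shared_state.target_volume = 0.5f;
SDL_UnlockAudioStream(stream);
```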
| Error | on failure. |
|
inline |
Each AudioStream has an internal mutex it uses to protect its data structures from threading conflicts. This function allows an app to lock that mutex, which could be useful when registering callbacks on this stream.
One does not need to lock a stream to use it in most cases, as the stream manages this lock internally. However, this lock is held during callbacks, which may run from arbitrary threads at any time, so if an app needs to protect shared data during those callbacks, locking the stream guarantees that the callback is not running while the lock is held.
This is just a wrapper over Mutex.Lock for an internal lock, so it has all the same attributes (recursive locks are allowed, etc.).
| stream | the audio stream to lock. |
| Error | on failure. |
|
inline |
This takes an audio buffer src of len bytes of format data and mixes it into dst, performing addition, volume adjustment, and overflow clipping. The buffer pointed to by dst must also be len bytes of format data.
This is provided for convenience; you can mix your own audio data.
Do not use this function for mixing together more than two streams of sample data. The output from repeated application of this function may be distorted by clipping, because there is no accumulator with greater range than the input (not to mention this being an inefficient way of doing it).
It is a common misconception that this function is required to write audio data to an output stream in an audio callback. While you can do that, MixAudio() is really only needed when you're mixing a single audio stream with a volume adjustment.
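A minimal sketch using the underlying C call; `dst`, `src`, and `len` are placeholders for two equally sized buffers of float samples:

```cpp
// Mix src into dst at half volume, with addition and overflow clipping.
// Both buffers must hold `len` bytes of SDL_AUDIO_F32 data.
if (!SDL_MixAudio(dst, src, SDL_AUDIO_F32, len, 0.5f)) {
    SDL_Log("MixAudio failed: %s", SDL_GetError());
}
```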
| dst | the destination for the mixed audio. |
| src | the source audio buffer to be mixed. |
| format | the AudioFormat structure representing the desired audio format. |
| volume | ranges from 0.0 to 1.0, and should be set to 1.0 for full audio volume. |
| Error | on failure. |
|
inline |
This takes an audio buffer src of len bytes of format data and mixes it into dst, performing addition, volume adjustment, and overflow clipping. The buffer pointed to by dst must also be len bytes of format data.
This is provided for convenience; you can mix your own audio data.
Do not use this function for mixing together more than two streams of sample data. The output from repeated application of this function may be distorted by clipping, because there is no accumulator with greater range than the input (not to mention this being an inefficient way of doing it).
It is a common misconception that this function is required to write audio data to an output stream in an audio callback. While you can do that, MixAudio() is really only needed when you're mixing a single audio stream with a volume adjustment.
| dst | the destination for the mixed audio. |
| src | the source audio buffer to be mixed. |
| format | the AudioFormat structure representing the desired audio format. |
| volume | ranges from 0.0 to 1.0, and should be set to 1.0 for full audio volume. |
| Error | on failure. |
|
inline |
You can open both playback and recording devices through this function. Playback devices will take data from bound audio streams, mix it, and send it to the hardware. Recording devices will feed any bound audio streams with a copy of any incoming data.
An opened audio device starts out with no audio streams bound. To start audio playing, bind a stream and supply audio data to it. Unlike SDL2, there is no audio callback; you only bind audio streams and make sure they have data flowing into them (however, you can simulate SDL2's semantics fairly closely by using AudioStream.AudioStream instead of this function).
If you don't care about opening a specific device, pass a devid of either AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING. In this case, SDL will try to pick the most reasonable default, and may also switch between physical devices seamlessly later, if the most reasonable default changes during the lifetime of this opened device (user changed the default in the OS's system preferences, the default got unplugged so the system jumped to a new default, the user plugged in headphones on a mobile device, etc). Unless you have a good reason to choose a specific device, this is probably what you want.
You may request a specific format for the audio device, but there is no promise the device will honor that request for several reasons. As such, it's only meant to be a hint as to what data your app will provide. Audio streams will accept data in whatever format you specify and manage conversion for you as appropriate. AudioDevice.GetFormat can tell you the preferred format for the device before opening and the actual format the device is using after opening.
It's legal to open the same device ID more than once; each successful open will generate a new logical AudioDevice that is managed separately from others on the same physical device. This allows libraries to open a device separately from the main app and bind its own streams without conflicting.
It is also legal to open a device ID returned by a previous call to this function; doing so just creates another logical device on the same physical device. This may be useful for making logical groupings of audio streams.
This function returns the opened device ID on success. This is a new, unique AudioDevice that represents a logical device.
Some backends might offer arbitrary devices (for example, a networked audio protocol that can connect to an arbitrary server). For these, as a change from SDL2, you should open a default device ID and use an SDL hint to specify the target if you care, or otherwise let the backend figure out a reasonable default. Most backends don't offer anything like this, and often this would be an end user setting an environment variable for their custom need, and not something an application should specifically manage.
When done with an audio device, possibly at the end of the app's life, one should call AudioDevice.Close() on the returned device id.
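A sketch of the typical open/bind/close flow, written with the underlying C API for concreteness; the requested format is only a hint:

```cpp
#include <SDL3/SDL.h>

// Open the default playback device with a preferred (not guaranteed) format,
// bind a stream so audio starts flowing, then close the logical device.
SDL_AudioSpec want = { .format = SDL_AUDIO_F32, .channels = 2, .freq = 48000 };
SDL_AudioDeviceID dev = SDL_OpenAudioDevice(SDL_AUDIO_DEVICE_DEFAULT_PLAYBACK, &want);
if (dev == 0) {
    SDL_Log("OpenAudioDevice failed: %s", SDL_GetError());
} else {
    SDL_AudioStream *stream = SDL_CreateAudioStream(&want, &want);
    SDL_BindAudioStream(dev, stream);
    // ... SDL_PutAudioStreamData(stream, ...) as data becomes available ...
    SDL_CloseAudioDevice(dev); // closes this logical device only
}
```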
| devid | the device instance id to open, or AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING for the most reasonable default device. |
| spec | the requested device configuration. Can be nullptr to use reasonable defaults. |
| Error | on failure. |
|
inline |
If all your app intends to do is provide a single source of PCM audio, this function allows you to do all your audio setup in a single call.
This is also intended to be a clean means to migrate apps from SDL2.
This function will open an audio device, create a stream and bind it. Unlike other methods of setup, the audio device will be closed when this stream is destroyed, so the app can treat the returned AudioStream as the only object needed to manage audio playback.
Also unlike other functions, the audio device begins paused. This is to map more closely to SDL2-style behavior, since there is no extra step here to bind a stream to begin audio flowing. The audio device should be resumed with AudioStream.ResumeDevice(stream).
This function works with both playback and recording devices.
The spec parameter represents the app's side of the audio stream. That is, for recording audio, this will be the output format, and for playing audio, this will be the input format. If spec is nullptr, the system will choose the format, and the app can use AudioStream.GetFormat() to obtain this information later.
If you don't care about opening a specific audio device, you can (and probably should), use AUDIO_DEVICE_DEFAULT_PLAYBACK for playback and AUDIO_DEVICE_DEFAULT_RECORDING for recording.
One can optionally provide a callback function; if nullptr, the app is expected to queue audio data for playback (or unqueue audio data if capturing). Otherwise, the callback will begin to fire once the device is unpaused.
Destroying the returned stream with AudioStream.Destroy will also close the audio device associated with this stream.
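A sketch of the one-call setup, again with the underlying C API; the format and the decision to skip the callback are illustrative only:

```cpp
#include <SDL3/SDL.h>

// Device + stream in one call; the device starts paused.
SDL_AudioSpec spec = { .format = SDL_AUDIO_F32, .channels = 2, .freq = 48000 };
SDL_AudioStream *stream = SDL_OpenAudioDeviceStream(
    SDL_AUDIO_DEVICE_DEFAULT_PLAYBACK, &spec,
    nullptr /* no callback: queue data yourself */, nullptr);
if (!stream) {
    SDL_Log("OpenAudioDeviceStream failed: %s", SDL_GetError());
} else {
    SDL_ResumeAudioStreamDevice(stream); // nothing plays until the device is resumed
}
```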
| devid | an audio device to open, or AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING. |
| spec | the audio stream's data format. Can be nullptr. |
| callback | a callback where the app will provide new data for playback, or receive new data for recording. Can be nullptr, in which case the app will need to call AudioStream.PutData or AudioStream.GetData as necessary. |
| userdata | app-controlled pointer passed to callback. Can be nullptr. Ignored if callback is nullptr. |
| Error | on failure. |
|
inline |
If all your app intends to do is provide a single source of PCM audio, this function allows you to do all your audio setup in a single call.
This is also intended to be a clean means to migrate apps from SDL2.
This function will open an audio device, create a stream and bind it. Unlike other methods of setup, the audio device will be closed when this stream is destroyed, so the app can treat the returned AudioStream as the only object needed to manage audio playback.
Also unlike other functions, the audio device begins paused. This is to map more closely to SDL2-style behavior, since there is no extra step here to bind a stream to begin audio flowing. The audio device should be resumed with AudioStream.ResumeDevice(stream).
This function works with both playback and recording devices.
The spec parameter represents the app's side of the audio stream. That is, for recording audio, this will be the output format, and for playing audio, this will be the input format. If spec is nullptr, the system will choose the format, and the app can use AudioStream.GetFormat() to obtain this information later.
If you don't care about opening a specific audio device, you can (and probably should), use AUDIO_DEVICE_DEFAULT_PLAYBACK for playback and AUDIO_DEVICE_DEFAULT_RECORDING for recording.
One can optionally provide a callback function; if nullptr, the app is expected to queue audio data for playback (or unqueue audio data if capturing). Otherwise, the callback will begin to fire once the device is unpaused.
Destroying the returned stream with AudioStream.Destroy will also close the audio device associated with this stream.
| devid | an audio device to open, or AUDIO_DEVICE_DEFAULT_PLAYBACK or AUDIO_DEVICE_DEFAULT_RECORDING. |
| spec | the audio stream's data format. Can be nullptr. |
| callback | a callback where the app will provide new data for playback, or receive new data for recording. |
| Error | on failure. |
|
inline |
If all your app intends to do is provide a single source of PCM audio, this function allows you to do all your audio setup in a single call.
This is also intended to be a clean means to migrate apps from SDL2.
This function will open an audio device, create a stream and bind it. Unlike other methods of setup, the audio device will be closed when this stream is destroyed, so the app can treat the returned AudioStream as the only object needed to manage audio playback.
Also unlike other functions, the audio device begins paused. This is to map more closely to SDL2-style behavior, since there is no extra step here to bind a stream to begin audio flowing. The audio device should be resumed with AudioStream.ResumeDevice(stream).
This function works with both playback and recording devices.
The spec parameter represents the app's side of the audio stream. That is, for recording audio, this will be the output format, and for playing audio, this will be the input format. If spec is nullptr, the system will choose the format, and the app can use AudioStream.GetFormat() to obtain this information later.
If you don't care about opening a specific audio device, you can (and probably should), use AUDIO_DEVICE_DEFAULT_PLAYBACK for playback and AUDIO_DEVICE_DEFAULT_RECORDING for recording.
One can optionally provide a callback function; if nullptr, the app is expected to queue audio data for playback (or unqueue audio data if capturing). Otherwise, the callback will begin to fire once the device is unpaused.
Destroying the returned stream with AudioStream.Destroy will also close the audio device associated with this stream.
| spec | the audio stream's data format. Can be nullptr. |
| callback | a callback where the app will provide new data for playback, or receive new data for recording. Can be nullptr, in which case the app will need to call AudioStream.PutData or AudioStream.GetData as necessary. |
| userdata | app-controlled pointer passed to callback. Can be nullptr. Ignored if callback is nullptr. |
| Error | on failure. |
|
inline |
If all your app intends to do is provide a single source of PCM audio, this function allows you to do all your audio setup in a single call.
This is also intended to be a clean means to migrate apps from SDL2.
This function will open an audio device, create a stream and bind it. Unlike other methods of setup, the audio device will be closed when this stream is destroyed, so the app can treat the returned AudioStream as the only object needed to manage audio playback.
Also unlike other functions, the audio device begins paused. This is to map more closely to SDL2-style behavior, since there is no extra step here to bind a stream to begin audio flowing. The audio device should be resumed with AudioStream.ResumeDevice(stream).
This function works with both playback and recording devices.
The spec parameter represents the app's side of the audio stream. That is, for recording audio, this will be the output format, and for playing audio, this will be the input format. If spec is nullptr, the system will choose the format, and the app can use AudioStream.GetFormat() to obtain this information later.
If you don't care about opening a specific audio device, you can (and probably should), use AUDIO_DEVICE_DEFAULT_PLAYBACK for playback and AUDIO_DEVICE_DEFAULT_RECORDING for recording.
One can optionally provide a callback function; if nullptr, the app is expected to queue audio data for playback (or unqueue audio data if capturing). Otherwise, the callback will begin to fire once the device is unpaused.
Destroying the returned stream with AudioStream.Destroy will also close the audio device associated with this stream.
| spec | the audio stream's data format. Can be nullptr. |
| callback | a callback where the app will provide new data for playback, or receive new data for recording. |
| Error | on failure. |
|
inline |
This function pauses audio processing for a given device. Any bound audio streams will not progress, and no audio will be generated. Pausing one device does not prevent other unpaused devices from running.
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow. Pausing a paused device is a legal no-op.
Pausing a device can be useful to halt all audio without unbinding all the audio streams. This might be useful while a game is paused, or a level is loading, etc.
Physical devices can not be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be.
| Error | on failure. |
|
inline |
This function pauses audio processing for a given device. Any bound audio streams will not progress, and no audio will be generated. Pausing one device does not prevent other unpaused devices from running.
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow. Pausing a paused device is a legal no-op.
Pausing a device can be useful to halt all audio without unbinding all the audio streams. This might be useful while a game is paused, or a level is loading, etc.
Physical devices can not be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be.
| devid | a device opened by AudioDevice.AudioDevice(). |
| Error | on failure. |
|
inline |
This function pauses audio processing for a given device. Any bound audio streams will not progress, and no audio will be generated. Pausing one device does not prevent other unpaused devices from running.
Pausing a device can be useful to halt all audio without unbinding all the audio streams. This might be useful while a game is paused, or a level is loading, etc.
| stream | the audio stream associated with the audio device to pause. |
| Error | on failure. |
|
inline |
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow.
Physical devices can not be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be. Physical and invalid device IDs will report themselves as unpaused here.
|
inline |
This function pauses audio processing for a given device. Any bound audio streams will not progress, and no audio will be generated. Pausing one device does not prevent other unpaused devices from running.
Pausing a device can be useful to halt all audio without unbinding all the audio streams. This might be useful while a game is paused, or a level is loading, etc.
| Error | on failure. |
|
inline |
This data must match the format/channels/samplerate specified in the latest call to AudioStream.SetFormat, or the format specified when creating the stream if it hasn't been changed.
Note that this call simply copies the unconverted data for later. This is different than SDL2, where data was converted during the Put call and the Get call would just dequeue the previously-converted data.
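A sketch of queueing data, where `stream` and the `samples` vector of interleaved floats are app-owned placeholders and the stream's input format is assumed to be float:

```cpp
// Queue a buffer whose layout matches the stream's current input format.
if (!SDL_PutAudioStreamData(stream, samples.data(),
                            static_cast<int>(samples.size() * sizeof(float)))) {
    SDL_Log("PutAudioStreamData failed: %s", SDL_GetError());
}
```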
| stream | the stream the audio data is being added to. |
| buf | a pointer to the audio data to add. |
| Error | on failure. |
|
inline |
This data must match the format/channels/samplerate specified in the latest call to AudioStream.SetFormat, or the format specified when creating the stream if it hasn't been changed.
Note that this call simply copies the unconverted data for later. This is different than SDL2, where data was converted during the Put call and the Get call would just dequeue the previously-converted data.
| buf | a pointer to the audio data to add. |
| Error | on failure. |
|
inline |
This function unpauses audio processing for a given device that has previously been paused with AudioDevice.Pause(). Once unpaused, any bound audio streams will begin to progress again, and audio can be generated.
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow. Unpausing an unpaused device is a legal no-op.
Physical devices can not be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be.
| Error | on failure. |
|
inline |
This function unpauses audio processing for a given device that has previously been paused with AudioDevice.Pause(). Once unpaused, any bound audio streams will begin to progress again, and audio can be generated.
Unlike in SDL2, audio devices start in an unpaused state, since an app has to bind a stream before any audio will flow. Unpausing an unpaused device is a legal no-op.
Physical devices can not be paused or unpaused, only logical devices created through AudioDevice.AudioDevice() can be.
| devid | a device opened by AudioDevice.AudioDevice(). |
| Error | on failure. |
|
inline |
This function unpauses audio processing for a given device that has previously been paused. Once unpaused, any bound audio streams will begin to progress again, and audio can be generated.
Remember, AudioStream.AudioStream opens the device in a paused state, so this call is required for audio playback to begin on such a device.
| stream | the audio stream associated with the audio device to resume. |
| Error | on failure. |
|
inline |
This function unpauses audio processing for a given device that has previously been paused. Once unpaused, any bound audio streams will begin to progress again, and audio can be generated.
Remember, AudioStream.AudioStream opens the device in a paused state, so this call is required for audio playback to begin on such a device.
| Error | on failure. |
|
inline |
The gain of a device is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio devices default to a gain of 1.0f (no change in output).
Physical devices may not have their gain changed, only logical devices, and this function will always fail when used on physical devices. While it might seem attractive to adjust several logical devices at once in this way, it would allow an app or library to interfere with another portion of the program's otherwise-isolated devices.
This is applied, along with any per-audiostream gain, during playback to the hardware, and can be continuously changed to create various effects. On recording devices, this will adjust the gain before passing the data into an audiostream; that recording audiostream can then adjust its gain further when outputting the data elsewhere, if it likes, but that second gain is not applied until the data leaves the audiostream again.
| devid | the audio device on which to change gain. |
| gain | the gain. 1.0f is no change, 0.0f is silence. |
| Error | on failure. |
|
inline |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
The buffer is the final mix of all bound audio streams on an opened device; this callback will fire regularly for any device that is both opened and unpaused. If there is no new data to mix, either because no streams are bound to the device or all the streams are empty, this callback will still fire with the entire buffer set to silence.
This callback is allowed to make changes to the data; the contents of the buffer after this call is what is ultimately passed along to the hardware.
The callback is always provided the data in float format (values from -1.0f to 1.0f), but the number of channels or sample rate may be different than the format the app requested when opening the device; SDL might have had to manage a conversion behind the scenes, or the playback might have jumped to new physical hardware when a system default changed, etc. These details may change between calls. Accordingly, the size of the buffer might change between calls as well.
This callback can run at any time, and from any thread; if you need to serialize access to your app's data, you should provide and use a mutex or other synchronization device.
All of this to say: there are specific needs this callback can fulfill, but it is not the simplest interface. Apps should generally provide audio in their preferred format through an AudioStream and let SDL handle the difference.
This function is extremely time-sensitive; the callback should do the least amount of work possible and return as quickly as it can. The longer the callback runs, the higher the risk of audio dropouts or other problems.
This function will block until the audio device is in between iterations, so any existing callback that might be running will finish before this function sets the new callback and returns.
Setting a nullptr callback function disables any previously-set callback.
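A sketch of a post-mix hook that just measures the peak of the final mix; `g_peak` and the commented registration call site are illustrative placeholders:

```cpp
#include <SDL3/SDL.h>

static float g_peak = 0.0f; // written only from the audio thread in this sketch

// The final mix is always float; buflen is the buffer size in bytes.
static void SDLCALL postmix(void *userdata, const SDL_AudioSpec *spec,
                            float *buffer, int buflen)
{
    const int samples = buflen / (int)sizeof(float);
    float peak = 0.0f;
    for (int i = 0; i < samples; ++i)
        peak = SDL_max(peak, SDL_fabsf(buffer[i]));
    g_peak = peak; // keep the callback short; no allocation or locking here
}

// Registration, where `dev` is an opened device ID:
//   SDL_SetAudioPostmixCallback(dev, postmix, nullptr);
```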
| devid | the ID of an opened audio device. |
| callback | a callback function to be called. Can be nullptr. |
| userdata | app-controlled pointer passed to callback. Can be nullptr. |
| Error | on failure. |
|
inline |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
The buffer is the final mix of all bound audio streams on an opened device; this callback will fire regularly for any device that is both opened and unpaused. If there is no new data to mix, either because no streams are bound to the device or all the streams are empty, this callback will still fire with the entire buffer set to silence.
This callback is allowed to make changes to the data; the contents of the buffer after this call is what is ultimately passed along to the hardware.
The callback is always provided the data in float format (values from -1.0f to 1.0f), but the number of channels or sample rate may be different than the format the app requested when opening the device; SDL might have had to manage a conversion behind the scenes, or the playback might have jumped to new physical hardware when a system default changed, etc. These details may change between calls. Accordingly, the size of the buffer might change between calls as well.
This callback can run at any time, and from any thread; if you need to serialize access to your app's data, you should provide and use a mutex or other synchronization device.
All of this to say: there are specific needs this callback can fulfill, but it is not the simplest interface. Apps should generally provide audio in their preferred format through an AudioStream and let SDL handle the difference.
This function is extremely time-sensitive; the callback should do the least amount of work possible and return as quickly as it can. The longer the callback runs, the higher the risk of audio dropouts or other problems.
This function will block until the audio device is in between iterations, so any existing callback that might be running will finish before this function sets the new callback and returns.
Setting a nullptr callback function disables any previously-set callback.
| devid | the ID of an opened audio device. |
| callback | a callback function to be called. Can be nullptr. |
| Error | on failure. |
|
inline |
Future calls to AudioStream.GetAvailable and AudioStream.GetData will reflect the new format, and future calls to AudioStream.PutData must provide data in the new input format.
Data that was previously queued in the stream will still be operated on in the format that was current when it was added, which is to say you can put the end of a sound file in one format to a stream, change formats for the next sound file, and start putting that new data while the previous sound file is still queued, and everything will still play back correctly.
If a stream is bound to a device, then the format of the side of the stream bound to a device cannot be changed (src_spec for recording devices, dst_spec for playback devices). Attempts to make a change to this side will be ignored, but this will not report an error. The other side's format can be changed.
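A sketch that changes only the output side of a stream; `stream` is a placeholder:

```cpp
// Leave the input side untouched (nullptr) and switch the output side
// to stereo float at 44100 Hz.
SDL_AudioSpec dst = { .format = SDL_AUDIO_F32, .channels = 2, .freq = 44100 };
SDL_SetAudioStreamFormat(stream, nullptr, &dst);
```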
| stream | the stream on which the format is being changed. |
| src_spec | the new format of the audio input; if nullptr, it is not changed. |
| dst_spec | the new format of the audio output; if nullptr, it is not changed. |
| Error | on failure. |
|
inline |
The frequency ratio is used to adjust the rate at which input data is consumed. Changing this effectively modifies the speed and pitch of the audio. A value greater than 1.0 will play the audio faster, and at a higher pitch. A value less than 1.0 will play the audio slower, and at a lower pitch.
This is applied during AudioStream.GetData, and can be continuously changed to create various effects.
| stream | the stream on which the frequency ratio is being changed. |
| ratio | the frequency ratio. 1.0 is normal speed. Must be between 0.01 and 100. |
| Error | on failure. |
|
inline |
The gain of a stream is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio streams default to a gain of 1.0f (no change in output).
This is applied during AudioStream.GetData, and can be continuously changed to create various effects.
| stream | the stream on which the gain is being changed. |
| gain | the gain. 1.0f is no change, 0.0f is silence. |
| Error | on failure. |
|
inline |
This callback is called before data is obtained from the stream, giving the callback the chance to add more on-demand.
The callback can (optionally) call AudioStream.PutData() to add more audio to the stream during this call; if needed, the request that triggered this callback will obtain the new data immediately.
The callback's additional_amount argument is roughly how many bytes of unconverted data (in the stream's input format) is needed by the caller, although this may overestimate a little for safety. This takes into account how much is already in the stream and only asks for any extra necessary to resolve the request, which means the callback may be asked for zero bytes, and a different amount on each call.
The callback is not required to supply exact amounts; it is allowed to supply too much or too little or none at all. The caller will get what's available, up to the amount they requested, regardless of this callback's outcome.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
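A sketch of an on-demand feeder using the C callback signature; in a real app the zero-filled chunk would be replaced with actual samples (zero bytes are silence for signed and float formats):

```cpp
#include <SDL3/SDL.h>
#include <vector>

// Called whenever the consumer needs roughly `additional_amount` more
// bytes of input-format data than the stream currently holds.
static void SDLCALL feed_more(void *userdata, SDL_AudioStream *stream,
                              int additional_amount, int total_amount)
{
    if (additional_amount <= 0)
        return;
    std::vector<Uint8> chunk(additional_amount, 0);
    SDL_PutAudioStreamData(stream, chunk.data(), (int)chunk.size());
}

// Registration:
//   SDL_SetAudioStreamGetCallback(stream, feed_more, nullptr);
```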
| stream | the audio stream to set the new callback on. |
| callback | the new callback function to call when data is requested from the stream. |
| userdata | an opaque pointer provided to the callback for its own personal use. |
| Error | on failure. |
|
inline |
This callback is called before data is obtained from the stream, giving the callback the chance to add more on-demand.
The callback can (optionally) call AudioStream.PutData() to add more audio to the stream during this call; if needed, the request that triggered this callback will obtain the new data immediately.
The callback's additional_amount argument is roughly how many bytes of unconverted data (in the stream's input format) is needed by the caller, although this may overestimate a little for safety. This takes into account how much is already in the stream and only asks for any extra necessary to resolve the request, which means the callback may be asked for zero bytes, and a different amount on each call.
The callback is not required to supply exact amounts; it is allowed to supply too much or too little or none at all. The caller will get what's available, up to the amount they requested, regardless of this callback's outcome.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| stream | the audio stream to set the new callback on. |
| callback | the new callback function to call when data is requested from the stream. |
| Error | on failure. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
The input channel map reorders data that is added to a stream via AudioStream.PutData. Future calls to AudioStream.PutData must provide data in the new channel order.
Each item in the array represents an input channel, and its value is the channel that it should be remapped to. To reverse a stereo signal's left and right values, you'd have an array of { 1, 0 }. It is legal to remap multiple channels to the same thing, so { 1, 1 } would duplicate the right channel to both channels of a stereo signal. An element in the channel map set to -1 instead of a valid channel will mute that channel, setting it to a silence value.
You cannot change the number of channels through a channel map, just reorder/mute them.
Data that was previously queued in the stream will still be operated on in the order that was current when it was added, which is to say you can put the end of a sound file in one order to a stream, change orders for the next sound file, and start putting that new data while the previous sound file is still queued, and everything will still play back correctly.
Audio streams default to no remapping applied. Passing a nullptr channel map is legal, and turns off remapping.
SDL will copy the channel map; the caller does not have to save this array after this call.
If count is not equal to the current number of channels in the audio stream's format, this will fail. This is a safety measure to make sure a race condition hasn't changed the format while this call is setting the channel map.
Unlike attempting to change the stream's format, the input channel map on a stream bound to a recording device is permitted to change at any time; any data added to the stream from the device after this call will have the new mapping, but previously-added data will still have the prior mapping.
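A sketch that swaps left and right on a stereo stream's input; `stream` is a placeholder and the count must match the stream's current channel count:

```cpp
// { 1, 0 } remaps input channel 0 -> 1 and 1 -> 0, i.e. swaps left and right.
const int swap_lr[2] = { 1, 0 };
SDL_SetAudioStreamInputChannelMap(stream, swap_lr, 2);
```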
| stream | the AudioStream to change. |
| chmap | the new channel map, nullptr to reset to default. |
| Error | on failure. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
The output channel map reorders data that is leaving a stream via AudioStream.GetData.
Each item in the array represents an input channel, and its value is the channel that it should be remapped to. To reverse a stereo signal's left and right values, you'd have an array of { 1, 0 }. It is legal to remap multiple channels to the same thing, so { 1, 1 } would duplicate the right channel to both channels of a stereo signal. An element in the channel map set to -1 instead of a valid channel will mute that channel, setting it to a silence value.
You cannot change the number of channels through a channel map, just reorder/mute them.
The output channel map can be changed at any time, as output remapping is applied during AudioStream.GetData.
Audio streams default to no remapping applied. Passing a nullptr channel map is legal, and turns off remapping.
SDL will copy the channel map; the caller does not have to save this array after this call.
If count is not equal to the current number of channels in the audio stream's format, this will fail. This is a safety measure to make sure a race condition hasn't changed the format while this call is setting the channel map.
Unlike attempting to change the stream's format, the output channel map on a stream bound to a recording device is permitted to change at any time; any data added to the stream after this call will have the new mapping, but previously-added data will still have the prior mapping. When the channel map doesn't match the hardware's channel layout, SDL will convert the data before feeding it to the device for playback.
| stream | the AudioStream to change. |
| chmap | the new channel map, nullptr to reset to default. |
| Error | on failure. |
|
inline |
This callback is called after the data is added to the stream, giving the callback the chance to obtain it immediately.
The callback can (optionally) call AudioStream.GetData() to obtain audio from the stream during this call.
The callback's additional_amount argument is how many bytes of converted data (in the stream's output format) was provided by the caller, although this may underestimate a little for safety. This value might be less than what is currently available in the stream, if data was already there, and might be less than the caller provided if the stream needs to keep a buffer to aid in resampling. Which means the callback may be provided with zero bytes, and a different amount on each call.
The callback may call AudioStream.GetAvailable to see the total amount currently available to read from the stream, instead of the total provided by the current call.
The callback is not required to obtain all data. It is allowed to read less or none at all. Anything not read now simply remains in the stream for later access.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| stream | the audio stream to set the new callback on. |
| callback | the new callback function to call when data is added to the stream. |
| userdata | an opaque pointer provided to the callback for its own personal use. |
| Error | on failure. |
|
inline |
This callback is called after the data is added to the stream, giving the callback the chance to obtain it immediately.
The callback can (optionally) call AudioStream.GetData() to obtain audio from the stream during this call.
The callback's additional_amount argument is how many bytes of converted data (in the stream's output format) was provided by the caller, although this may underestimate a little for safety. This value might be less than what is currently available in the stream, if data was already there, and might be less than the caller provided if the stream needs to keep a buffer to aid in resampling. Which means the callback may be provided with zero bytes, and a different amount on each call.
The callback may call AudioStream.GetAvailable to see the total amount currently available to read from the stream, instead of the total provided by the current call.
The callback is not required to obtain all data. It is allowed to read less or none at all. Anything not read now simply remains in the stream for later access.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| stream | the audio stream to set the new callback on. |
| callback | the new callback function to call when data is added to the stream. |
| Error | on failure. |
|
inline |
Future calls to AudioStream.GetAvailable and AudioStream.GetData will reflect the new format, and future calls to AudioStream.PutData must provide data in the new input format.
Data that was previously queued in the stream will still be operated on in the format that was current when it was added, which is to say you can put the end of a sound file in one format to a stream, change formats for the next sound file, and start putting that new data while the previous sound file is still queued, and everything will still play back correctly.
If a stream is bound to a device, then the format of the side of the stream bound to a device cannot be changed (src_spec for recording devices, dst_spec for playback devices). Attempts to make a change to this side will be ignored, but this will not report an error. The other side's format can be changed.
| src_spec | the new format of the audio input; if nullptr, it is not changed. |
| dst_spec | the new format of the audio output; if nullptr, it is not changed. |
| Error | on failure. |
|
inline |
The frequency ratio is used to adjust the rate at which input data is consumed. Changing this effectively modifies the speed and pitch of the audio. A value greater than 1.0 will play the audio faster, and at a higher pitch. A value less than 1.0 will play the audio slower, and at a lower pitch.
This is applied during AudioStream.GetData, and can be continuously changed to create various effects.
| ratio | the frequency ratio. 1.0 is normal speed. Must be between 0.01 and 100. |
| Error | on failure. |
|
inline |
The gain of a device is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio devices default to a gain of 1.0f (no change in output).
Physical devices may not have their gain changed, only logical devices, and this function will always fail when used on physical devices. While it might seem attractive to adjust several logical devices at once in this way, it would allow an app or library to interfere with another portion of the program's otherwise-isolated devices.
This is applied, along with any per-audiostream gain, during playback to the hardware, and can be continuously changed to create various effects. On recording devices, this will adjust the gain before passing the data into an audiostream; that recording audiostream can then adjust its gain further when outputting the data elsewhere, if it likes, but that second gain is not applied until the data leaves the audiostream again.
| gain | the gain. 1.0f is no change, 0.0f is silence. |
| Error | on failure. |
|
inline |
The gain of a stream is its volume; a larger gain means a louder output, with a gain of zero being silence.
Audio streams default to a gain of 1.0f (no change in output).
This is applied during AudioStream.GetData, and can be continuously changed to create various effects.
| gain | the gain. 1.0f is no change, 0.0f is silence. |
| Error | on failure. |
|
inline |
This callback is called before data is obtained from the stream, giving the callback the chance to add more on-demand.
The callback can (optionally) call AudioStream.PutData() to add more audio to the stream during this call; if needed, the request that triggered this callback will obtain the new data immediately.
The callback's additional_amount argument is roughly how many bytes of unconverted data (in the stream's input format) is needed by the caller, although this may overestimate a little for safety. This takes into account how much is already in the stream and only asks for any extra necessary to resolve the request, which means the callback may be asked for zero bytes, and a different amount on each call.
The callback is not required to supply exact amounts; it is allowed to supply too much or too little or none at all. The caller will get what's available, up to the amount they requested, regardless of this callback's outcome.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| callback | the new callback function to call when data is requested from the stream. |
| userdata | an opaque pointer provided to the callback for its own personal use. |
| Error | on failure. |
|
inline |
This callback is called before data is obtained from the stream, giving the callback the chance to add more on-demand.
The callback can (optionally) call AudioStream.PutData() to add more audio to the stream during this call; if needed, the request that triggered this callback will obtain the new data immediately.
The callback's additional_amount argument is roughly how many bytes of unconverted data (in the stream's input format) is needed by the caller, although this may overestimate a little for safety. This takes into account how much is already in the stream and only asks for any extra necessary to resolve the request, which means the callback may be asked for zero bytes, and a different amount on each call.
The callback is not required to supply exact amounts; it is allowed to supply too much or too little or none at all. The caller will get what's available, up to the amount they requested, regardless of this callback's outcome.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| callback | the new callback function to call when data is requested from the stream. |
| Error | on failure. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
The input channel map reorders data that is added to a stream via AudioStream.PutData. Future calls to AudioStream.PutData must provide data in the new channel order.
Each item in the array represents an input channel, and its value is the channel that it should be remapped to. To reverse a stereo signal's left and right values, you'd have an array of { 1, 0 }. It is legal to remap multiple channels to the same thing, so { 1, 1 } would duplicate the right channel to both channels of a stereo signal. An element in the channel map set to -1 instead of a valid channel will mute that channel, setting it to a silence value.
You cannot change the number of channels through a channel map, just reorder/mute them.
Data that was previously queued in the stream will still be operated on in the order that was current when it was added, which is to say you can put the end of a sound file in one order to a stream, change orders for the next sound file, and start putting that new data while the previous sound file is still queued, and everything will still play back correctly.
Audio streams default to no remapping applied. Passing a nullptr channel map is legal, and turns off remapping.
SDL will copy the channel map; the caller does not have to save this array after this call.
If count is not equal to the current number of channels in the audio stream's format, this will fail. This is a safety measure to make sure a race condition hasn't changed the format while this call is setting the channel map.
Unlike attempting to change the stream's format, the input channel map on a stream bound to a recording device is permitted to change at any time; any data added to the stream from the device after this call will have the new mapping, but previously-added data will still have the prior mapping.
| chmap | the new channel map, nullptr to reset to default. |
| Error | on failure. |
|
inline |
Channel maps are optional; most things do not need them, instead passing data in the order that SDL expects.
The output channel map reorders data that is leaving a stream via AudioStream.GetData.
Each item in the array represents an input channel, and its value is the channel that it should be remapped to. To reverse a stereo signal's left and right values, you'd have an array of { 1, 0 }. It is legal to remap multiple channels to the same thing, so { 1, 1 } would duplicate the right channel to both channels of a stereo signal. An element in the channel map set to -1 instead of a valid channel will mute that channel, setting it to a silence value.
You cannot change the number of channels through a channel map, just reorder/mute them.
The output channel map can be changed at any time, as output remapping is applied during AudioStream.GetData.
Audio streams default to no remapping applied. Passing a nullptr channel map is legal, and turns off remapping.
SDL will copy the channel map; the caller does not have to save this array after this call.
If count is not equal to the current number of channels in the audio stream's format, this will fail. This is a safety measure to make sure a race condition hasn't changed the format while this call is setting the channel map.
Unlike attempting to change the stream's format, the output channel map on a stream bound to a recording device is permitted to change at any time; any data added to the stream after this call will have the new mapping, but previously-added data will still have the prior mapping. When the channel map doesn't match the hardware's channel layout, SDL will convert the data before feeding it to the device for playback.
| chmap | the new channel map, nullptr to reset to default. |
| Error | on failure. |
|
inline |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
The buffer is the final mix of all bound audio streams on an opened device; this callback will fire regularly for any device that is both opened and unpaused. If there is no new data to mix, either because no streams are bound to the device or all the streams are empty, this callback will still fire with the entire buffer set to silence.
This callback is allowed to make changes to the data; the contents of the buffer after this call is what is ultimately passed along to the hardware.
The callback is always provided the data in float format (values from -1.0f to 1.0f), but the number of channels or sample rate may be different than the format the app requested when opening the device; SDL might have had to manage a conversion behind the scenes, or the playback might have jumped to new physical hardware when a system default changed, etc. These details may change between calls. Accordingly, the size of the buffer might change between calls as well.
This callback can run at any time, and from any thread; if you need to serialize access to your app's data, you should provide and use a mutex or other synchronization device.
All of this to say: there are specific needs this callback can fulfill, but it is not the simplest interface. Apps should generally provide audio in their preferred format through an AudioStream and let SDL handle the difference.
This function is extremely time-sensitive; the callback should do the least amount of work possible and return as quickly as it can. The longer the callback runs, the higher the risk of audio dropouts or other problems.
This function will block until the audio device is in between iterations, so any existing callback that might be running will finish before this function sets the new callback and returns.
Setting a nullptr callback function disables any previously-set callback.
| callback | a callback function to be called. Can be nullptr. |
| userdata | app-controlled pointer passed to callback. Can be nullptr. |
| Error | on failure. |
|
inline |
This is useful for accessing the final mix, perhaps for writing a visualizer or applying a final effect to the audio data before playback.
The buffer is the final mix of all bound audio streams on an opened device; this callback will fire regularly for any device that is both opened and unpaused. If there is no new data to mix, either because no streams are bound to the device or all the streams are empty, this callback will still fire with the entire buffer set to silence.
This callback is allowed to make changes to the data; the contents of the buffer after this call is what is ultimately passed along to the hardware.
The callback is always provided the data in float format (values from -1.0f to 1.0f), but the number of channels or sample rate may be different than the format the app requested when opening the device; SDL might have had to manage a conversion behind the scenes, or the playback might have jumped to new physical hardware when a system default changed, etc. These details may change between calls. Accordingly, the size of the buffer might change between calls as well.
This callback can run at any time, and from any thread; if you need to serialize access to your app's data, you should provide and use a mutex or other synchronization device.
All of this to say: there are specific needs this callback can fulfill, but it is not the simplest interface. Apps should generally provide audio in their preferred format through an AudioStream and let SDL handle the difference.
This function is extremely time-sensitive; the callback should do the least amount of work possible and return as quickly as it can. The longer the callback runs, the higher the risk of audio dropouts or other problems.
This function will block until the audio device is in between iterations, so any existing callback that might be running will finish before this function sets the new callback and returns.
Setting a nullptr callback function disables any previously-set callback.
| callback | a callback function to be called. Can be nullptr. |
| Error | on failure. |
|
inline |
This callback is called after the data is added to the stream, giving the callback the chance to obtain it immediately.
The callback can (optionally) call AudioStream.GetData() to obtain audio from the stream during this call.
The callback's additional_amount argument is how many bytes of converted data (in the stream's output format) was provided by the caller, although this may underestimate a little for safety. This value might be less than what is currently available in the stream, if data was already there, and might be less than the caller provided if the stream needs to keep a buffer to aid in resampling. Which means the callback may be provided with zero bytes, and a different amount on each call.
The callback may call AudioStream.GetAvailable to see the total amount currently available to read from the stream, instead of the total provided by the current call.
The callback is not required to obtain all data. It is allowed to read less or none at all. Anything not read now simply remains in the stream for later access.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| callback | the new callback function to call when data is added to the stream. |
| userdata | an opaque pointer provided to the callback for its own personal use. |
| Error | on failure. |
|
inline |
This callback is called after the data is added to the stream, giving the callback the chance to obtain it immediately.
The callback can (optionally) call AudioStream.GetData() to obtain audio from the stream during this call.
The callback's additional_amount argument is how many bytes of converted data (in the stream's output format) was provided by the caller, although this may underestimate a little for safety. This value might be less than what is currently available in the stream, if data was already there, and might be less than the caller provided if the stream needs to keep a buffer to aid in resampling. Which means the callback may be provided with zero bytes, and a different amount on each call.
The callback may call AudioStream.GetAvailable to see the total amount currently available to read from the stream, instead of the total provided by the current call.
The callback is not required to obtain all data. It is allowed to read less or none at all. Anything not read now simply remains in the stream for later access.
Clearing or flushing an audio stream does not call this callback.
This function obtains the stream's lock, which means any existing callback (get or put) in progress will finish running before setting the new callback.
Setting a nullptr function turns off the callback.
| callback | the new callback function to call when data is added to the stream. |
| Error | on failure. |
|
inline |
This is a convenience function, equivalent to calling UnbindAudioStreams(&stream, 1).
|
inline |
This is a convenience function, equivalent to calling UnbindAudioStreams(&stream, 1).
| stream | an audio stream to unbind from a device. Can be nullptr. |
|
inline |
The streams being unbound do not all have to be on the same device. All streams on the same device will be unbound atomically (data will stop flowing through all unbound streams on the same device at the same time).
Unbinding a stream that isn't bound to a device is a legal no-op.
| streams | an array of audio streams to unbind. Can be nullptr or contain nullptr. |
|
inline |
This unlocks an audio stream after a call to AudioStream.Lock.
| Error | on failure. |
|
inline |
This unlocks an audio stream after a call to AudioStream.Lock.
| stream | the audio stream to unlock. |
| Error | on failure. |
|
constexpr |
Several functions that require an AudioDevice will accept this value to signify the app just wants the system to choose a default device instead of the app providing a specific one.
|
constexpr |
Several functions that require an AudioDevice will accept this value to signify the app just wants the system to choose a default device instead of the app providing a specific one.
|
constexpr |
|
constexpr |
|
constexpr |
Generally one should use AudioFormat.IsBigEndian or AudioFormat.IsLittleEndian instead of this constant directly.
|
constexpr |
Generally one should use AudioFormat.GetBitSize instead of this constant directly.
|
constexpr |
Generally one should use AudioFormat.IsFloat instead of this constant directly.
|
constexpr |
Generally one should use AudioFormat.IsSigned instead of this constant directly.
|
constexpr |
|
constexpr |
|
constexpr |