Tutorials and Examples

This section provides a collection of tutorials and examples to help you learn how to use SoundFlow for various audio processing tasks. Each tutorial provides step-by-step instructions and explanations, while the examples offer ready-to-run code snippets that demonstrate specific features.

Tutorials

Playback

1. Basic Playback

This tutorial demonstrates how to play an audio file from disk using SoundPlayer and StreamDataProvider.

Steps:

  1. Create a new console application:

    dotnet new console -o BasicPlayback
    cd BasicPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace BasicPlayback;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine with the MiniAudio backend.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            // Replace "path/to/your/audiofile.wav" with the actual path to your audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file on your computer.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer that loads an audio file using StreamDataProvider, adds the player to the Master mixer, and starts playback. The console application then waits for the user to press a key before stopping playback and cleaning up.
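If you don't want to wait for a key press, the same player API supports unattended playback for a fixed duration. A minimal sketch (the ten-second duration is an arbitrary illustrative value) that replaces the Console.ReadKey() call in Main above:

    // Let the audio play for ten seconds, then stop and clean up.
    Thread.Sleep(TimeSpan.FromSeconds(10));
    player.Stop();
    Mixer.Master.RemoveComponent(player);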

2. Web Playback

This tutorial demonstrates how to play an audio stream from a URL using SoundPlayer and StreamDataProvider.

Steps:

  1. Create a new console application:

    dotnet new console -o WebPlayback
    cd WebPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System.Net.Http;

    namespace WebPlayback;

    internal static class Program
    {
        private static async Task Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create an HttpClient to fetch the audio stream.
            using var httpClient = new HttpClient();
            // Replace "your-audio-stream-url" with the actual URL of an audio stream (e.g., an internet radio station).
            var stream = await httpClient.GetStreamAsync("your-audio-stream-url");
            // Create a SoundPlayer and load the stream.
            var player = new SoundPlayer(new StreamDataProvider(stream));
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until the user presses a key.
            Console.WriteLine("Playing stream... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "your-audio-stream-url" with the actual URL of an audio stream.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates an HttpClient to download the audio stream from the provided URL, creates a SoundPlayer that uses a StreamDataProvider to read from the stream, adds the player to the Master mixer, and starts playback. The async and await keywords are used to perform the network operation asynchronously without blocking the main thread.

3. Playback Control

This tutorial demonstrates how to control audio playback using Play, Pause, Stop, and Seek.

Steps:

  1. Create a new console application:

    dotnet new console -o PlaybackControl
    cd PlaybackControl
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace PlaybackControl;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            Console.WriteLine("Playing audio... (p: pause, r: resume, s: seek, any other key: stop)");
            // Handle user input for playback control.
            while (player.State != PlaybackState.Stopped)
            {
                var keyInfo = Console.ReadKey(true);
                switch (keyInfo.Key)
                {
                    case ConsoleKey.P:
                        player.Pause();
                        Console.WriteLine("Paused");
                        break;
                    case ConsoleKey.R:
                        player.Play();
                        Console.WriteLine("Resumed");
                        break;
                    case ConsoleKey.S:
                        Console.Write("Enter seek time (in seconds): ");
                        if (float.TryParse(Console.ReadLine(), out var seekTime))
                        {
                            player.Seek(seekTime);
                            Console.WriteLine($"Seeked to {seekTime} seconds");
                        }
                        else
                        {
                            Console.WriteLine("Invalid seek time.");
                        }
                        break;
                    default:
                        player.Stop();
                        Console.WriteLine("Stopped");
                        break;
                }
            }
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, adds it to the Master mixer, and starts playback. It then enters a loop that handles user input for playback control:

  • P: Pauses playback.
  • R: Resumes playback.
  • S: Prompts the user for a seek time (in seconds) and seeks to that position.
  • Any other key: Stops playback.
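As a variation, a single key can toggle between pause and resume by checking player.State. This sketch adds one case to the switch statement above; it assumes the PlaybackState enum also defines a Paused value alongside the Stopped value already used:

    // Hypothetical toggle case: assumes PlaybackState.Paused exists.
    case ConsoleKey.Spacebar:
        if (player.State == PlaybackState.Paused)
        {
            player.Play(); // Resume from the paused position.
            Console.WriteLine("Resumed");
        }
        else
        {
            player.Pause();
            Console.WriteLine("Paused");
        }
        break;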

4. Looping

This tutorial demonstrates how to enable looping for a SoundPlayer.

Steps:

  1. Create a new console application:

    dotnet new console -o LoopingPlayback
    cd LoopingPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace LoopingPlayback;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Enable looping.
            player.IsLooping = true;
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until the user presses a key.
            Console.WriteLine("Playing audio in a loop... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code is similar to the basic playback example, but it sets the IsLooping property of the SoundPlayer to true. This causes the player to automatically restart playback from the beginning when it reaches the end of the audio data, creating a continuous loop.
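Because IsLooping is an ordinary property, it can also be toggled while the player is running. A small sketch, slotted into Main in place of the single Console.ReadKey() call:

    // Toggle looping on each key press; press Escape to stop.
    while (Console.ReadKey(true).Key != ConsoleKey.Escape)
    {
        player.IsLooping = !player.IsLooping;
        Console.WriteLine($"Looping: {player.IsLooping}");
    }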

5. Surround Sound

This tutorial demonstrates how to use SurroundPlayer to play audio with surround sound configurations.

Steps:

  1. Create a new console application:

    dotnet new console -o SurroundPlayback
    cd SurroundPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System.Numerics;

    namespace SurroundPlayback;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine with 8 channels for 7.1 surround sound.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback, channels: 8);
            // Create a SurroundPlayer and load an audio file.
            // Make sure the file has a compatible number of channels.
            var player = new SurroundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/multichannel/audiofile.wav")));
            // Configure the SurroundPlayer for 7.1 surround sound.
            player.SpeakerConfig = SurroundPlayer.SpeakerConfiguration.Surround71;
            // Set the panning method to VBAP.
            player.Panning = SurroundPlayer.PanningMethod.Vbap;
            // Set the listener position (optional).
            player.ListenerPosition = new Vector2(0, 0); // Listener is in the center
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing surround sound audio... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/multichannel/audiofile.wav" with the actual path to a multi-channel audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine with 8 channels to support 7.1 surround sound. It then creates a SurroundPlayer, loads a multi-channel audio file, configures the player for 7.1 surround sound using SpeakerConfig, sets the panning method to VBAP, and optionally sets the listener position. The rest of the code is similar to the basic playback example.

Note:

  • Make sure the number of channels in the audio file matches the chosen speaker configuration.
  • Experiment with different SpeakerConfiguration, PanningMethod, and ListenerPosition values to hear how they affect the surround sound experience.
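For example, ListenerPosition can be changed while audio is playing to shift where the sound appears to come from. A short sketch using only the properties shown above (the coordinate values are illustrative):

    // Move the listener off-center for five seconds, then back.
    player.ListenerPosition = new Vector2(-0.5f, 0.5f);
    Thread.Sleep(TimeSpan.FromSeconds(5));
    player.ListenerPosition = new Vector2(0, 0);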

6. Efficient Playback with ChunkedDataProvider

This tutorial demonstrates how to use the ChunkedDataProvider for efficient playback of large audio files.

Steps:

  1. Create a new console application:

    dotnet new console -o ChunkedPlayback
    cd ChunkedPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace ChunkedPlayback;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a ChunkedDataProvider and load a large audio file.
            // Replace "path/to/your/large/audiofile.wav" with the actual path to your audio file.
            using var dataProvider = new ChunkedDataProvider("path/to/your/large/audiofile.wav");
            // Create a SoundPlayer and use the ChunkedDataProvider.
            var player = new SoundPlayer(dataProvider);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with ChunkedDataProvider... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/large/audiofile.wav" with the actual path to a large audio file on your computer.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a ChunkedDataProvider that loads a large audio file, creates a SoundPlayer that uses the ChunkedDataProvider as its audio source, adds the player to the Master mixer, and starts playback. The ChunkedDataProvider will efficiently read and decode the audio file in chunks, preventing memory issues and ensuring smooth playback. You can adjust the chunk size through the ChunkedDataProvider constructor, as sketched below.
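For example, a larger chunk trades memory for fewer disk reads. The sketch below assumes a constructor overload with a chunk-size parameter; the parameter name and units are assumptions, so check the ChunkedDataProvider constructor in your version:

    // Hypothetical overload: the chunkSize parameter name and units are assumptions.
    using var dataProvider = new ChunkedDataProvider(
        "path/to/your/large/audiofile.wav",
        chunkSize: 1 << 20); // roughly one megabyte per chunk, for illustration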

Note:

The ChunkedDataProvider is especially useful when working with very large audio files that would consume too much memory if loaded entirely at once. It’s also beneficial when you need to seek within a large file, as it only decodes the necessary portions of the file.

7. Network Playback with NetworkDataProvider (Direct URL)

This tutorial demonstrates how to use the NetworkDataProvider to play audio from a direct URL.

Steps:

  1. Create a new console application:

    dotnet new console -o NetworkPlaybackDirect
    cd NetworkPlaybackDirect
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace NetworkPlaybackDirect;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a NetworkDataProvider and provide the URL of an audio file.
            // Replace "your-direct-audio-url" with the actual URL.
            using var dataProvider = new NetworkDataProvider("your-direct-audio-url");
            // Create a SoundPlayer and use the NetworkDataProvider.
            var player = new SoundPlayer(dataProvider);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio from a direct URL... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "your-direct-audio-url" with the actual URL of an audio file on the internet.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a NetworkDataProvider that connects to the specified URL, creates a SoundPlayer that uses the NetworkDataProvider as its audio source, adds the player to the Master mixer, and starts playback. The NetworkDataProvider will download and decode the audio data in the background, and the SoundPlayer will play it.

8. Network Playback with NetworkDataProvider (HLS Playlist)

This tutorial demonstrates how to use the NetworkDataProvider to play audio from an HLS playlist.

Steps:

  1. Create a new console application:

    dotnet new console -o NetworkPlaybackHls
    cd NetworkPlaybackHls
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace NetworkPlaybackHls;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a NetworkDataProvider and provide the URL of an HLS playlist.
            // Replace "your-hls-playlist-url" with the actual URL.
            using var dataProvider = new NetworkDataProvider("your-hls-playlist-url");
            // Create a SoundPlayer and use the NetworkDataProvider.
            var player = new SoundPlayer(dataProvider);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio from an HLS playlist... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "your-hls-playlist-url" with the actual URL of an HLS playlist (e.g., an .m3u8 file).

  5. Build and run the application:

    dotnet run

Explanation:

This code is similar to the previous example, but it uses an HLS playlist URL. The NetworkDataProvider will automatically detect that it’s an HLS stream, download and parse the playlist, and then download and decode the individual media segments. The SoundPlayer will play the audio seamlessly as the segments are downloaded.

Recording

1. Basic Recording

This tutorial demonstrates how to record audio from the default recording device and save it to a WAV file.

Steps:

  1. Create a new console application:

    dotnet new console -o BasicRecording
    cd BasicRecording
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;

    namespace BasicRecording;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine for recording with a 44.1kHz sample rate.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Record);
            // Create a Recorder instance, specifying the output file path and encoding format.
            using var recorder = new Recorder("output.wav", sampleRate: 44100, encodingFormat: EncodingFormat.Wav);
            // Start recording.
            Console.WriteLine("Recording... Press any key to stop.");
            recorder.StartRecording();
            // Wait for the user to press a key.
            Console.ReadKey();
            // Stop recording.
            recorder.StopRecording();
            Console.WriteLine("Recording stopped. Saved to output.wav");
        }
    }
  4. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine for recording, creates a Recorder instance that will save the recorded audio to output.wav using the WAV encoding format, starts recording, waits for the user to press a key, and then stops the recording.

2. Custom Processing

This tutorial demonstrates how to use a custom ProcessCallback to process recorded audio in real time.

Steps:

  1. Create a new console application:

    dotnet new console -o CustomProcessing
    cd CustomProcessing
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;

    namespace CustomProcessing;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine for recording.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Record);
            // Create a Recorder instance with a custom processing callback.
            using var recorder = new Recorder(ProcessAudio, sampleRate: 44100);
            // Start recording.
            Console.WriteLine("Recording... Press any key to stop.");
            recorder.StartRecording();
            // Wait for the user to press a key.
            Console.ReadKey();
            // Stop recording.
            recorder.StopRecording();
            Console.WriteLine("Recording stopped.");
        }

        // This method will be called for each chunk of recorded audio.
        private static void ProcessAudio(Span<float> samples)
        {
            // Perform custom processing on the audio samples.
            // For example, calculate the average level:
            float sum = 0;
            for (int i = 0; i < samples.Length; i++)
            {
                sum += Math.Abs(samples[i]);
            }
            float averageLevel = sum / samples.Length;
            Console.WriteLine($"Average level: {averageLevel:F4}");
        }
    }
  4. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine for recording and creates a Recorder instance, passing a custom ProcessAudio method as the AudioProcessCallback. This method will be called for each chunk of recorded audio samples. In this example, the ProcessAudio method simply calculates and prints the average level of the audio, but you can replace this with any custom processing logic.
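The same Span<float> callback signature works for in-place processing as well. A minimal sketch of an alternative callback that applies a fixed gain (whether the modified samples reach the recorded output depends on the Recorder's semantics in your version):

    // Alternative callback: attenuate the recorded audio in place.
    private static void ApplyGain(Span<float> samples)
    {
        const float gain = 0.5f; // about -6 dB, an illustrative value
        for (int i = 0; i < samples.Length; i++)
        {
            samples[i] *= gain;
        }
    }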

3. VAD-based Recording

This tutorial demonstrates how to use the VoiceActivityDetector to automatically start and stop recording based on the presence of voice activity.

Steps:

  1. Create a new console application:

    dotnet new console -o VADRecording
    cd VADRecording
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;

    namespace VADRecording;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine for recording.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Record);
            // Create a VoiceActivityDetector instance.
            var vad = new VoiceActivityDetector();
            // Create a Recorder instance and pass the VAD to the constructor.
            using var recorder = new Recorder("output.wav", sampleRate: 44100, vad: vad);
            // Subscribe to the SpeechDetected event (optional).
            vad.SpeechDetected += isDetected => Console.WriteLine($"Speech detected: {isDetected}");
            // Start recording. The Recorder will automatically pause and resume based on voice activity.
            Console.WriteLine("Recording with VAD... Press any key to stop.");
            recorder.StartRecording();
            // Wait for the user to press a key.
            Console.ReadKey();
            // Stop recording.
            recorder.StopRecording();
            Console.WriteLine("Recording stopped. Saved to output.wav");
        }
    }
  4. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine for recording, creates a VoiceActivityDetector instance, and then creates a Recorder, passing the vad instance to its constructor. The Recorder will now use the VAD to automatically pause recording when silence is detected and resume when voice activity is detected. The SpeechDetected event is used to print messages to the console when the VAD’s state changes.

4. Microphone Playback

This tutorial demonstrates how to capture audio from the microphone and play it back in real time using the MicrophoneDataProvider.

Steps:

  1. Create a new console application:

    dotnet new console -o MicrophonePlayback
    cd MicrophonePlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace MicrophonePlayback;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine for mixed-mode operation (playback and recording).
            using var audioEngine = new MiniAudioEngine(44100, Capability.Mixed);
            // Create a MicrophoneDataProvider instance.
            using var microphoneDataProvider = new MicrophoneDataProvider();
            // Create a SoundPlayer and connect the MicrophoneDataProvider.
            var player = new SoundPlayer(microphoneDataProvider);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start capturing audio from the microphone.
            microphoneDataProvider.StartCapture();
            // Start playback.
            player.Play();
            Console.WriteLine("Playing live microphone audio... Press any key to stop.");
            Console.ReadKey();
            // Stop playback and capture.
            player.Stop();
            microphoneDataProvider.StopCapture();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine in mixed mode (to allow both playback and recording simultaneously), creates a MicrophoneDataProvider to capture audio from the microphone, creates a SoundPlayer that uses the MicrophoneDataProvider as its audio source, adds the player to the Master mixer, starts capturing microphone input, and then starts playback. The audio captured from the microphone will be played back in real-time through the default audio output device.
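Because the microphone is just another data provider feeding a SoundPlayer, the modifiers from the Effects tutorials below also work on the live signal. A sketch that adds the AlgorithmicReverbModifier (covered later) before the player is added to the mixer; beware of feedback if the speakers are audible to the microphone:

    // Apply reverb to the live microphone signal (requires 'using SoundFlow.Modifiers;').
    var reverb = new AlgorithmicReverbModifier { RoomSize = 0.8f, Wet = 0.3f };
    player.AddModifier(reverb);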

Effects

1. Reverb

This tutorial demonstrates how to apply a reverb effect to an audio stream using the AlgorithmicReverbModifier.

Steps:

  1. Create a new console application:

    dotnet new console -o ReverbEffect
    cd ReverbEffect
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;

    namespace ReverbEffect;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create an AlgorithmicReverbModifier.
            var reverb = new AlgorithmicReverbModifier
            {
                RoomSize = 0.8f,
                Damp = 0.5f,
                Wet = 0.3f,
                Dry = 0.7f,
                Width = 1f
            };
            // Add the reverb modifier to the player.
            player.AddModifier(reverb);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with reverb... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates an AlgorithmicReverbModifier with custom settings, adds the modifier to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the reverb effect applied. Experiment with different values for RoomSize, Damp, Wet, Dry, and Width to change the characteristics of the reverb.
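Since these are ordinary properties, they can also be changed while audio is playing. A small sketch that fades the effect in (the step size and timing are illustrative):

    // Gradually raise the wet level from fully dry up to the tutorial's 0.3f.
    for (var wet = 0f; wet <= 0.3f; wet += 0.05f)
    {
        reverb.Wet = wet;
        Thread.Sleep(200); // 200 ms per step
    }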

2. Equalization

This tutorial demonstrates how to use the ParametricEqualizer to adjust the frequency balance of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o Equalization
    cd Equalization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;

    namespace Equalization;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a ParametricEqualizer.
            var equalizer = new ParametricEqualizer(AudioEngine.Channels);
            // Add some equalizer bands:
            // Boost low frequencies (bass)
            equalizer.AddBand(FilterType.LowShelf, 100, 6, 0.7f, 0);
            // Cut mid frequencies
            equalizer.AddBand(FilterType.Peaking, 1000, -4, 2, 0);
            // Boost high frequencies (treble)
            equalizer.AddBand(FilterType.HighShelf, 10000, 5, 0.7f, 0);
            // Add the equalizer to the player.
            player.AddModifier(equalizer);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with equalization... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a ParametricEqualizer, adds three equalizer bands (low-shelf boost, peaking cut, high-shelf boost), adds the equalizer to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the equalization applied.

Experiment with different filter types (FilterType), frequencies, gain values, and Q values to shape the sound to your liking.
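For example, a narrow cut can tame a harsh resonance. The call below reuses the AddBand signature from the code above, with a higher Q value for a narrower band (the specific frequency and gain are illustrative):

    // Narrow -8 dB cut around 3 kHz; Q = 8 keeps the affected band tight.
    equalizer.AddBand(FilterType.Peaking, 3000, -8, 8, 0);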

3. Chorus and Delay

This tutorial demonstrates how to apply chorus and delay effects using the ChorusModifier and DelayModifier.

Steps:

  1. Create a new console application:

    dotnet new console -o ChorusDelay
    cd ChorusDelay
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;

    namespace ChorusDelay;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a ChorusModifier.
            var chorus = new ChorusModifier(
                depth: 25f,          // Depth (in milliseconds)
                rate: 0.8f,          // Rate (in Hz)
                feedback: 0.5f,      // Feedback amount
                wetDryMix: 0.5f,     // Wet/dry mix (0 = dry, 1 = wet)
                maxDelayLength: 500  // Maximum delay length (in milliseconds)
            );
            // Create a DelayModifier.
            var delay = new DelayModifier(
                delayLength: 500,      // Delay length (in milliseconds)
                feedback: 0.6f,        // Feedback amount
                wetMix: 0.4f,          // Wet/dry mix
                cutoffFrequency: 4000  // Cutoff frequency for the low-pass filter
            );
            // Add the chorus and delay modifiers to the player.
            player.AddModifier(chorus);
            player.AddModifier(delay);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with chorus and delay... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates ChorusModifier and DelayModifier instances with custom settings, adds both modifiers to the player (they will be applied in the order they are added), adds the player to the Master mixer, and starts playback. You will hear the audio with both chorus and delay effects applied.

Experiment with different parameter values for the chorus and delay modifiers to create a wide range of sonic textures.

4. Compression

This tutorial demonstrates how to use the CompressorModifier to reduce the dynamic range of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o Compression
    cd Compression
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;

    namespace Compression;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a CompressorModifier.
            var compressor = new CompressorModifier(
                threshold: -20f,  // Threshold (in dB)
                ratio: 4f,        // Compression ratio
                attack: 10f,      // Attack time (in milliseconds)
                release: 100f,    // Release time (in milliseconds)
                knee: 5f,         // Knee width (in dB)
                makeupGain: 6f    // Makeup gain (in dB)
            );
            // Add the compressor to the player.
            player.AddModifier(compressor);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with compression... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a CompressorModifier with specific settings, adds the compressor to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the compression effect applied, resulting in a more consistent volume level.

Experiment with different values for threshold, ratio, attack, release, knee, and makeupGain to understand how they affect the compression.
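As a comparison point, gentler "glue"-style settings reuse the same constructor; the values below are illustrative starting points, not recommendations:

    // Gentle compression: higher threshold, lower ratio, slower attack/release.
    var gentleCompressor = new CompressorModifier(
        threshold: -12f, ratio: 2f, attack: 30f,
        release: 250f, knee: 6f, makeupGain: 2f);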

5. Noise Reduction

This tutorial demonstrates how to use the NoiseReductionModifier to reduce noise in an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o NoiseReduction
    cd NoiseReduction
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;

    namespace NoiseReduction;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load a noisy audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/noisy/audiofile.wav")));
            // Create a NoiseReductionModifier.
            var noiseReducer = new NoiseReductionModifier(
                fftSize: 2048,         // FFT size (power of 2)
                alpha: 3f,             // Smoothing factor for noise estimation
                beta: 0.001f,          // Minimum gain for noise reduction
                smoothingFactor: 0.8f, // Smoothing factor for gain
                gain: 1.0f,            // Post-processing gain
                noiseFrames: 10        // Number of initial frames to use for noise estimation
            );
            // Add the noise reducer to the player.
            player.AddModifier(noiseReducer);
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio with noise reduction... Press any key to stop.");
            Console.ReadKey();
            // Stop playback.
            player.Stop();
            // Remove the player from the mixer.
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/noisy/audiofile.wav" with the actual path to a noisy audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads a noisy audio file, creates a NoiseReductionModifier with specific settings, adds the noise reducer to the player, adds the player to the Master mixer, and starts playback. You should hear a reduction in the noise level of the audio.

Experiment with different values for fftSize, alpha, beta, smoothingFactor, gain, and noiseFrames to fine-tune the noise reduction.

6. Mixing

This tutorial demonstrates how to use the Mixer to combine multiple audio sources.

Steps:

  1. Create a new console application:

    dotnet new console -o Mixing
    cd Mixing
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace Mixing;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create two SoundPlayer instances and load different audio files.
            var player1 = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile1.wav")));
            var player2 = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile2.wav")));
            // Create an Oscillator that generates a sine wave.
            var oscillator = new Oscillator
            {
                Frequency = 440, // 440 Hz (A4 note)
                Amplitude = 0.5f,
                Type = Oscillator.WaveformType.Sine
            };
            // Add the players and the oscillator to the master mixer.
            Mixer.Master.AddComponent(player1);
            Mixer.Master.AddComponent(player2);
            Mixer.Master.AddComponent(oscillator);
            // Start playback for both players.
            player1.Play();
            player2.Play();
            // Keep the console application running until the user presses a key.
            Console.WriteLine("Playing mixed audio... Press any key to stop.");
            Console.ReadKey();
            // Stop playback for both players.
            player1.Stop();
            player2.Stop();
            // Remove the components from the mixer.
            Mixer.Master.RemoveComponent(player1);
            Mixer.Master.RemoveComponent(player2);
            Mixer.Master.RemoveComponent(oscillator);
        }
    }
  4. Replace "path/to/your/audiofile1.wav" and "path/to/your/audiofile2.wav" with the actual paths to two different audio files.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates two SoundPlayer instances, loads two different audio files, creates an Oscillator that generates a sine wave, adds all three components to the Master mixer, and starts playback for the players. You will hear the two audio files and the sine wave mixed together.

Experiment with adding more sound sources to the mixer and adjusting their individual volumes and panning using the Volume and Pan properties of each SoundComponent.
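For example, using the Volume and Pan properties mentioned above (this sketch assumes the common convention that Pan runs from -1 for hard left to 1 for hard right):

    // Balance the mix: spread the two files and keep the sine wave quiet.
    player1.Volume = 0.8f;
    player1.Pan = -0.5f;      // toward the left
    player2.Volume = 0.6f;
    player2.Pan = 0.5f;       // toward the right
    oscillator.Volume = 0.2f; // background level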

Analysis

1. Level Metering

This tutorial demonstrates how to use the LevelMeterAnalyzer to measure the RMS (root mean square) and peak levels of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o LevelMetering
    cd LevelMetering
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;

    namespace LevelMetering;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a LevelMeterAnalyzer.
            var levelMeter = new LevelMeterAnalyzer();
            // Connect the player's output to the level meter's input.
            player.ConnectOutput(levelMeter);
            // Add the player to the master mixer (the level meter doesn't produce output).
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Create a timer to periodically display the RMS and peak levels.
            var timer = new System.Timers.Timer(100); // Update every 100 milliseconds
            timer.Elapsed += (sender, e) =>
            {
                Console.WriteLine($"RMS Level: {levelMeter.Rms:F4}, Peak Level: {levelMeter.Peak:F4}");
            };
            timer.Start();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio and displaying level meter... Press any key to stop.");
            Console.ReadKey();
            // Stop playback and clean up.
            timer.Stop();
            player.Stop();
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a LevelMeterAnalyzer, connects the player’s output to the analyzer’s input, adds the player to the Master mixer, and starts playback. It then creates a timer that fires every 100 milliseconds, printing the current RMS and peak levels to the console.
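The Rms and Peak values are linear amplitudes in the 0 to 1 range. Meters are usually read in decibels, and the standard conversion is 20 times the base-10 logarithm of the linear value; a small helper you could call from the timer callback:

    // Convert a linear level (0..1) to decibels relative to full scale (dBFS).
    private static float ToDb(float linear) =>
        linear > 0f ? 20f * MathF.Log10(linear) : float.NegativeInfinity;
    // Example: ToDb(0.5f) is about -6.0 dBFS.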

2. Spectrum Analysis

This tutorial demonstrates how to use the SpectrumAnalyzer to analyze the frequency content of an audio stream using the Fast Fourier Transform (FFT).

Steps:

  1. Create a new console application:

    dotnet new console -o SpectrumAnalysis
    cd SpectrumAnalysis
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;

    namespace SpectrumAnalysis;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a SpectrumAnalyzer with an FFT size of 2048.
            var spectrumAnalyzer = new SpectrumAnalyzer(fftSize: 2048);
            // Connect the player's output to the spectrum analyzer's input.
            player.ConnectOutput(spectrumAnalyzer);
            // Add the player to the master mixer (the spectrum analyzer doesn't produce output).
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Create a timer to periodically display the spectrum data.
            var timer = new System.Timers.Timer(100); // Update every 100 milliseconds
            timer.Elapsed += (sender, e) =>
            {
                // Get the spectrum data from the analyzer.
                var spectrumData = spectrumAnalyzer.SpectrumData;
                // Print the magnitude of the first few frequency bins.
                if (spectrumData.Length > 0)
                {
                    Console.Write("Spectrum: ");
                    for (int i = 0; i < Math.Min(10, spectrumData.Length); i++)
                    {
                        Console.Write($"{spectrumData[i]:F2} ");
                    }
                    Console.WriteLine();
                }
            };
            timer.Start();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio and displaying spectrum data... Press any key to stop.");
            Console.ReadKey();
            // Stop playback and clean up.
            timer.Stop();
            player.Stop();
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a SpectrumAnalyzer with an FFT size of 2048, connects the player’s output to the analyzer’s input, adds the player to the Master mixer, and starts playback. It then creates a timer that fires every 100 milliseconds, printing the magnitude of the first 10 frequency bins of the spectrum data to the console.
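Each bin index corresponds to a frequency via the usual FFT relation, frequency = index × sampleRate / fftSize. A helper for labeling the bins printed above, using the tutorial's 44100 Hz sample rate and FFT size of 2048:

    // Center frequency of FFT bin i.
    private static float BinFrequency(int i, int sampleRate = 44100, int fftSize = 2048) =>
        i * (float)sampleRate / fftSize;
    // Example: bin 10 is about 215.3 Hz (10 * 44100 / 2048).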

3. Voice Activity Detection

This tutorial demonstrates how to use the VoiceActivityDetector to detect the presence of human voice in an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o VoiceActivityDetection
    cd VoiceActivityDetection
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;

    namespace VoiceActivityDetection;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine (either for playback or recording).
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback); // Or Capability.Record for microphone input
            // Create a SoundPlayer and load an audio file (if using playback).
            // If using recording, you don't need a SoundPlayer.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a VoiceActivityDetector.
            var vad = new VoiceActivityDetector();
            // Connect the player's output (or microphone input) to the VAD's input.
            player.ConnectOutput(vad); // Or use Microphone.Default.ConnectOutput(vad) for recording
            // Subscribe to the SpeechDetected event.
            vad.SpeechDetected += isDetected => Console.WriteLine($"Speech detected: {isDetected}");
            // Add the player to the master mixer (if using playback).
            Mixer.Master.AddComponent(player);
            // Start playback (if using playback).
            player.Play();
            // If using recording, you would typically start a Recorder here instead of a SoundPlayer.
            // Keep the console application running until the user presses a key.
            Console.WriteLine("Analyzing audio for voice activity... Press any key to stop.");
            Console.ReadKey();
            // Stop playback (if using playback) and clean up.
            player.Stop();
            Mixer.Master.RemoveComponent(player);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file (if using playback).

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine (either for playback or recording), creates a SoundPlayer and loads an audio file (if using playback), creates a VoiceActivityDetector, connects the player’s output (or microphone input) to the VAD, subscribes to the SpeechDetected event to print messages to the console when speech is detected or not detected, adds the player to the Master mixer (if using playback), and starts playback.

If you want to analyze live microphone input instead of a file, you can reuse the MicrophoneDataProvider pattern from the Microphone Playback tutorial: feed the provider into a SoundPlayer and connect that player's output to the VAD, as sketched below.
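A minimal sketch of that wiring, assuming the same MicrophoneDataProvider API shown earlier (the engine must be initialized with a capability that includes recording, such as Capability.Mixed):

    // Live microphone input into the VAD instead of a file.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Mixed);
    using var micProvider = new MicrophoneDataProvider();
    var micPlayer = new SoundPlayer(micProvider);
    micPlayer.ConnectOutput(vad); // same VAD and SpeechDetected handler as above
    Mixer.Master.AddComponent(micPlayer);
    micProvider.StartCapture();
    micPlayer.Play();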

Visualization

1. Level Meter

This tutorial demonstrates how to create a simple console-based level meter using the LevelMeterAnalyzer and LevelMeterVisualizer.

Steps:

  1. Create a new console application:

    dotnet new console -o LevelMeterVisualization
    cd LevelMeterVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    using System.Diagnostics;

    namespace LevelMeterVisualization;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a LevelMeterAnalyzer.
            var levelMeterAnalyzer = new LevelMeterAnalyzer();
            // Create a LevelMeterVisualizer.
            var levelMeterVisualizer = new LevelMeterVisualizer(levelMeterAnalyzer);
            // Connect the player's output to the level meter analyzer's input.
            player.ConnectOutput(levelMeterAnalyzer);
            // Add the player to the master mixer (the level meter analyzer doesn't produce output).
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Subscribe to the VisualizationUpdated event to trigger a redraw.
            levelMeterVisualizer.VisualizationUpdated += (sender, e) =>
            {
                DrawLevelMeter(levelMeterAnalyzer.Rms, levelMeterAnalyzer.Peak);
            };
            // Start a timer to update the visualization.
            var stopwatch = new Stopwatch();
            stopwatch.Start();
            var timer = new System.Timers.Timer(1000 / 60); // Update at approximately 60 FPS
            timer.Elapsed += (sender, e) =>
            {
                levelMeterVisualizer.ProcessOnAudioData(Array.Empty<float>());
                levelMeterVisualizer.Render(new ConsoleVisualizationContext());
            };
            timer.Start();
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio and displaying level meter... Press any key to stop.");
            Console.ReadKey();
            // Stop playback and clean up.
            timer.Stop();
            player.Stop();
            Mixer.Master.RemoveComponent(player);
            levelMeterVisualizer.Dispose();
        }

        // Helper method to draw a simple console-based level meter.
        private static void DrawLevelMeter(float rms, float peak)
        {
            int barLength = (int)(rms * 40); // Scale the RMS value to a bar length
            int peakBarLength = (int)(peak * 40); // Scale the peak value to a bar length
            Console.SetCursorPosition(0, 0);
            Console.Write("RMS: ");
            Console.Write(new string('#', barLength));
            Console.Write(new string(' ', 40 - barLength));
            Console.Write("|\n");
            Console.SetCursorPosition(0, 1);
            Console.Write("Peak: ");
            Console.Write(new string('#', peakBarLength));
            Console.Write(new string(' ', 40 - peakBarLength));
            Console.Write("|");
            Console.SetCursorPosition(0, 3);
        }
    }

    // Simple IVisualizationContext implementation for console output.
    public class ConsoleVisualizationContext : IVisualizationContext
    {
        public void Clear()
        {
            // No need to clear the console in this example.
        }

        public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1)
        {
            // Simple line drawing not implemented for this example.
        }

        public void DrawRectangle(float x, float y, float width, float height, Color color)
        {
            // Simple rectangle drawing not implemented for this example.
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a LevelMeterAnalyzer and a LevelMeterVisualizer, connects the player’s output to the analyzer, adds the player to the Master mixer, and starts playback. It then subscribes to the VisualizationUpdated event of the visualizer to redraw the level meter when the data changes. Finally, it starts a timer that calls ProcessOnAudioData and Render on the visualizer approximately 60 times per second. The DrawLevelMeter method is a helper function that draws a simple console-based level meter using # characters.

2. Waveform

This tutorial demonstrates how to use the WaveformVisualizer to display the waveform of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o WaveformVisualization
    cd WaveformVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    using System.Diagnostics;

    namespace WaveformVisualization;

    internal static class Program
    {
        private static void Main(string[] args)
        {
            // Initialize the audio engine.
            using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
            // Create a SoundPlayer and load an audio file.
            var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
            // Create a WaveformVisualizer.
            var waveformVisualizer = new WaveformVisualizer();
            // Add the player to the master mixer.
            Mixer.Master.AddComponent(player);
            // Start playback.
            player.Play();
            // Subscribe to the VisualizationUpdated event to trigger a redraw.
            waveformVisualizer.VisualizationUpdated += (sender, e) =>
            {
                DrawWaveform(waveformVisualizer.Waveform);
            };
            AudioEngine.OnAudioProcessed += waveformVisualizer.ProcessOnAudioData;
            // Keep the console application running until playback finishes or the user presses a key.
            Console.WriteLine("Playing audio and displaying waveform... Press any key to stop.");
            Console.ReadKey();
            // Stop playback and clean up.
            player.Stop();
            Mixer.Master.RemoveComponent(player);
            waveformVisualizer.Dispose();
        }

        // Helper method to draw a simple console-based waveform.
        private static void DrawWaveform(List<float> waveform)
        {
            Console.Clear();
            int consoleWidth = Console.WindowWidth;
            int consoleHeight = Console.WindowHeight;
            if (waveform.Count == 0)
            {
                return;
            }
            for (int i = 0; i < consoleWidth; i++)
            {
                // Calculate the index into the waveform data, mapping the console width to the waveform length.
                int waveformIndex = (int)(i * (waveform.Count / (float)consoleWidth));
                waveformIndex = Math.Clamp(waveformIndex, 0, waveform.Count - 1);
                // Normalize the waveform value to the console height.
                float sampleValue = waveform[waveformIndex];
                int consoleY = (int)((sampleValue + 1) * 0.5 * consoleHeight); // Map [-1, 1] to [0, consoleHeight]
                consoleY = Math.Clamp(consoleY, 0, consoleHeight - 1);
                // Draw a character at the calculated position.
                Console.SetCursorPosition(i, consoleHeight - consoleY - 1);
                Console.Write("*");
            }
            Console.SetCursorPosition(0, consoleHeight - 1);
        }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a WaveformVisualizer, adds the player to the Master mixer, and starts playback. It subscribes to the VisualizationUpdated event of the visualizer to redraw the waveform when the data changes. The DrawWaveform method is a helper function that draws a simple console-based waveform using * characters. The AudioEngine.OnAudioProcessed event feeds each chunk of processed audio data to the WaveformVisualizer, and is unsubscribed again during cleanup.
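
One limitation of DrawWaveform is that it samples a single value per console column, which can miss short transients when the waveform holds far more samples than the console has columns. A common refinement is to reduce each column to its minimum and maximum sample and draw a vertical span between the two. Below is a sketch of that reduction in plain C#; it is not part of the SoundFlow API:

// Reduce a long waveform to one (min, max) pair per console column.
// A sketch only; pair it with a drawing loop that fills from Min to Max.
private static (float Min, float Max)[] ReduceWaveform(List<float> waveform, int columns)
{
var result = new (float Min, float Max)[columns];
if (waveform.Count == 0) return result;
for (int i = 0; i < columns; i++)
{
// Map this column to a contiguous slice of the waveform.
int start = (int)(i * (waveform.Count / (float)columns));
int end = Math.Clamp((int)((i + 1) * (waveform.Count / (float)columns)), start + 1, waveform.Count);
float min = float.MaxValue, max = float.MinValue;
for (int j = start; j < end; j++)
{
min = Math.Min(min, waveform[j]);
max = Math.Max(max, waveform[j]);
}
result[i] = (min, max);
}
return result;
}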

3. Spectrum Analyzer

This tutorial demonstrates how to create a simple console-based spectrum analyzer using the SpectrumAnalyzer and SpectrumVisualizer.

Steps:

  1. Create a new console application:

    dotnet new console -o SpectrumAnalyzerVisualization
    cd SpectrumAnalyzerVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    namespace SpectrumAnalyzerVisualization;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a SpectrumAnalyzer with an FFT size of 2048.
    var spectrumAnalyzer = new SpectrumAnalyzer(fftSize: 2048);
    // Create a SpectrumVisualizer.
    var spectrumVisualizer = new SpectrumVisualizer(spectrumAnalyzer);
    // Connect the player's output to the spectrum analyzer's input.
    player.ConnectOutput(spectrumAnalyzer);
    // Add the player to the master mixer (the spectrum analyzer doesn't produce output).
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Subscribe to the VisualizationUpdated event to trigger a redraw.
    spectrumVisualizer.VisualizationUpdated += (sender, e) =>
    {
    DrawSpectrum(spectrumAnalyzer.SpectrumData);
    };
    // Start a timer to update the visualization at approximately 60 FPS.
    var timer = new System.Timers.Timer(1000.0 / 60); // ~16.7 ms per frame
    timer.Elapsed += (sender, e) =>
    {
    spectrumVisualizer.ProcessOnAudioData(Array.Empty<float>());
    spectrumVisualizer.Render(new ConsoleVisualizationContext());
    };
    timer.Start();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying spectrum analyzer... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    timer.Stop();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    spectrumVisualizer.Dispose();
    }
    // Helper method to draw a simple console-based spectrum analyzer.
    private static void DrawSpectrum(ReadOnlySpan<float> spectrumData)
    {
    Console.Clear();
    int consoleWidth = Console.WindowWidth;
    int consoleHeight = Console.WindowHeight;
    if (spectrumData.IsEmpty)
    {
    return;
    }
    int barWidth = Math.Max(1, consoleWidth / spectrumData.Length); // Ensure at least 1 character per bar
    for (int i = 0; i < spectrumData.Length; i++)
    {
    // Scale the magnitude to the console height.
    float magnitude = spectrumData[i];
    int barHeight = (int)(magnitude * consoleHeight / 2); // Adjust scaling factor as needed
    barHeight = Math.Clamp(barHeight, 0, consoleHeight - 1);
    // Draw a vertical bar for each frequency bin.
    for (int j = 0; j < barHeight; j++)
    {
    for (int w = 0; w < barWidth; w++)
    {
    // Guard against drawing past the right edge of the console window.
    int x = i * barWidth + w;
    if (x < consoleWidth - 1)
    {
    Console.SetCursorPosition(x, consoleHeight - 1 - j);
    Console.Write("*");
    }
    }
    }
    }
    Console.SetCursorPosition(0, consoleHeight - 1);
    }
    }
    // Simple IVisualizationContext implementation for console output.
    public class ConsoleVisualizationContext : IVisualizationContext
    {
    public void Clear()
    {
    // No need to clear the console in this example.
    }
    public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1f)
    {
    // Simple line drawing not implemented for this example.
    }
    public void DrawRectangle(float x, float y, float width, float height, Color color)
    {
    // Simple rectangle drawing not implemented for this example.
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a SpectrumAnalyzer and a SpectrumVisualizer, connects the player’s output to the analyzer, adds the player to the Master mixer, and starts playback. It subscribes to the VisualizationUpdated event of the visualizer to redraw the spectrum when the data changes. The DrawSpectrum method is a helper function that draws a simple console-based spectrum analyzer using * characters. The height of each bar represents the magnitude of the corresponding frequency bin.
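
If you want to label the bars, note that each element of SpectrumData corresponds to an FFT frequency bin, and the bin-to-frequency mapping is standard FFT arithmetic rather than anything SoundFlow-specific:

// Standard FFT mapping: bin i of an N-point FFT at sample rate fs is
// centered near i * fs / N Hz. Plain arithmetic, not a SoundFlow API.
private static float BinToFrequency(int binIndex, int fftSize, float sampleRate)
{
return binIndex * sampleRate / fftSize;
}

With fftSize = 2048 at 44100 Hz, each bin spans roughly 21.5 Hz, and only the first fftSize / 2 bins carry unique information for real-valued audio, so you may want to draw just the lower half or group bins logarithmically to better match how pitch is perceived.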

Integrating with UI Frameworks

These examples use basic console output for simplicity. To integrate SoundFlow’s visualizers with a GUI framework (like WPF, WinForms, Avalonia, or MAUI), you’ll need to:

  1. Create an IVisualizationContext implementation: This class will wrap the drawing primitives of your chosen UI framework. For example, in WPF, you might use DrawingContext methods to draw shapes on a Canvas.
  2. Update the UI from the VisualizationUpdated event: In the event handler, trigger a redraw of your UI element that hosts the visualization. Make sure to marshal the update to the UI thread using Dispatcher.Invoke or a similar mechanism if the event is raised from a different thread.
  3. Call the Render method: In your UI’s rendering logic, call the Render method of the visualizer, passing your IVisualizationContext implementation.

Example (Conceptual WPF):

// In your XAML:
// <Canvas x:Name="VisualizationCanvas" />
// In your code-behind (this conceptual snippet assumes the usual WPF namespaces:
// System.Windows, System.Windows.Controls, System.Windows.Media, System.Windows.Shapes):
public partial class MainWindow : Window
{
private readonly WaveformVisualizer _visualizer;
public MainWindow()
{
InitializeComponent();
// ... Initialize AudioEngine, SoundPlayer, etc. ...
_visualizer = new WaveformVisualizer();
_visualizer.VisualizationUpdated += OnVisualizationUpdated;
// ...
}
private void OnVisualizationUpdated(object? sender, EventArgs e)
{
// Marshal the update to the UI thread
Dispatcher.Invoke(() =>
{
VisualizationCanvas.Children.Clear(); // Clear previous drawing
// Create a custom IVisualizationContext that wraps the Canvas
var context = new WpfVisualizationContext(VisualizationCanvas);
// Render the visualization
_visualizer.Render(context);
});
}
// ...
}
// IVisualizationContext implementation for WPF
public class WpfVisualizationContext : IVisualizationContext
{
private readonly Canvas _canvas;
public WpfVisualizationContext(Canvas canvas)
{
_canvas = canvas;
}
public void Clear()
{
_canvas.Children.Clear();
}
public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1f)
{
var line = new Line
{
X1 = x1,
Y1 = y1,
X2 = x2,
Y2 = y2,
Stroke = new SolidColorBrush(System.Windows.Media.Color.FromArgb((byte)(color.A * 255), (byte)(color.R * 255), (byte)(color.G * 255), (byte)(color.B * 255))),
StrokeThickness = thickness
};
_canvas.Children.Add(line);
}
public void DrawRectangle(float x, float y, float width, float height, Color color)
{
var rect = new Rectangle
{
Width = width,
Height = height,
Fill = new SolidColorBrush(System.Windows.Media.Color.FromArgb((byte)(color.A * 255), (byte)(color.R * 255), (byte)(color.G * 255), (byte)(color.B * 255)))
};
Canvas.SetLeft(rect, x);
Canvas.SetTop(rect, y);
_canvas.Children.Add(rect);
}
}

Remember to adapt this conceptual example to your specific UI framework and project structure.
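
As a second illustration of the same pattern, a WinForms version might marshal with Control.BeginInvoke and render during the Paint event. In this sketch, visualizationPanel and GdiVisualizationContext are hypothetical names: the panel is whatever control hosts the visualization, and the context is an IVisualizationContext you would implement over System.Drawing.Graphics, analogous to WpfVisualizationContext above.

// Conceptual WinForms variant. visualizationPanel and GdiVisualizationContext
// are placeholders for your own control and IVisualizationContext wrapper.
private void OnVisualizationUpdated(object? sender, EventArgs e)
{
// Marshal to the UI thread and request a repaint of the hosting control.
visualizationPanel.BeginInvoke(() => visualizationPanel.Invalidate());
}
private void VisualizationPanel_Paint(object? sender, PaintEventArgs e)
{
// Draw the current visualization into the control's Graphics surface.
_visualizer.Render(new GdiVisualizationContext(e.Graphics));
}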

These tutorials and examples provide a starting point for using SoundFlow in your own audio applications. Explore the different components, modifiers, analyzers, and visualizers to create a wide range of audio processing and visualization solutions. Refer to the Core Concepts and API Reference sections of the Wiki for more detailed information about each class and interface.