Tutorials and Examples

This section provides a collection of tutorials and examples to help you learn how to use SoundFlow for various audio processing tasks. Each tutorial provides step-by-step instructions and explanations, while the examples offer ready-to-run code snippets that demonstrate specific features.

Playback

1. Basic Playback

This tutorial demonstrates how to play an audio file from disk using SoundPlayer and StreamDataProvider.

Steps:

  1. Create a new console application:

    dotnet new console -o BasicPlayback
    cd BasicPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System.IO;
    namespace BasicPlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine with the MiniAudio backend.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    // Replace "path/to/your/audiofile.wav" with the actual path to your audio file.
    using var dataProvider = new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav"));
    var player = new SoundPlayer(dataProvider);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    // dataProvider is disposed automatically due to 'using' statement.
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file on your computer.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a StreamDataProvider (which is IDisposable and managed by a using statement) to load an audio file, creates a SoundPlayer with this provider, adds the player to the Master mixer, and starts playback. The console application then waits for the user to press a key before stopping playback and cleaning up.

2. Web Playback (NetworkDataProvider)

This tutorial demonstrates how to play an audio stream from a URL using SoundPlayer and NetworkDataProvider.

Steps:

  1. Create a new console application:

    dotnet new console -o WebPlayback
    cd WebPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System;
    using System.IO;
    using System.Threading.Tasks;
    namespace WebPlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback);
    // Create a NetworkDataProvider. Replace "your-audio-stream-url"
    // with the actual URL (direct audio file or HLS .m3u8 playlist).
    // NetworkDataProvider is IDisposable.
    using var dataProvider = new NetworkDataProvider("your-audio-stream-url");
    var player = new SoundPlayer(dataProvider);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until the user presses a key.
    Console.WriteLine("Playing stream... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    // dataProvider is disposed automatically.
    }
    }
  4. Replace "your-audio-stream-url" with the actual URL of an audio stream (e.g., direct MP3/WAV or an HLS .m3u8 playlist).

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a NetworkDataProvider for the given URL (which handles direct files or HLS playlists), creates a SoundPlayer, adds it to the Master mixer, and starts playback. NetworkDataProvider is IDisposable and managed with a using statement.

3. Playback Control

This tutorial demonstrates how to control audio playback using Play, Pause, Stop, Seek, and PlaybackSpeed.

Steps:

  1. Create a new console application:

    dotnet new console -o PlaybackControl
    cd PlaybackControl
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System;
    using System.IO;
    namespace PlaybackControl;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    using var dataProvider = new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav"));
    var player = new SoundPlayer(dataProvider) { Volume = 0.8f }; // Example: set initial volume
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    Console.WriteLine("Playing audio... (p: pause/play, s: seek, +/-: speed, v/m: volume, any other: stop)");
    // Handle user input for playback control.
    while (player.State != PlaybackState.Stopped)
    {
    var keyInfo = Console.ReadKey(true);
    switch (keyInfo.Key)
    {
    case ConsoleKey.P:
    if (player.State == PlaybackState.Playing) player.Pause();
    else player.Play();
    Console.WriteLine(player.State == PlaybackState.Paused ? "Paused" : "Playing");
    break;
    case ConsoleKey.S:
    Console.Write("Enter seek time (in seconds, e.g., 10.5): ");
    if (float.TryParse(Console.ReadLine(), out var seekTimeSeconds))
    {
    if (player.Seek(TimeSpan.FromSeconds(seekTimeSeconds)))
    Console.WriteLine($"Seeked to {seekTimeSeconds:F1}s. Current time: {player.Time:F1}s");
    else
    Console.WriteLine("Seek failed.");
    }
    else Console.WriteLine("Invalid seek time.");
    break;
    case ConsoleKey.OemPlus:
    case ConsoleKey.Add:
    player.PlaybackSpeed = Math.Min(2.0f, player.PlaybackSpeed + 0.1f);
    Console.WriteLine($"Playback speed: {player.PlaybackSpeed:F1}x");
    break;
    case ConsoleKey.OemMinus:
    case ConsoleKey.Subtract:
    player.PlaybackSpeed = Math.Max(0.1f, player.PlaybackSpeed - 0.1f);
    Console.WriteLine($"Playback speed: {player.PlaybackSpeed:F1}x");
    break;
    case ConsoleKey.V:
    player.Volume = Math.Min(1.5f, player.Volume + 0.1f); // Allow gain up to 150%
    Console.WriteLine($"Volume: {player.Volume:P0}");
    break;
    case ConsoleKey.M:
    player.Volume = Math.Max(0.0f, player.Volume - 0.1f);
    Console.WriteLine($"Volume: {player.Volume:P0}");
    break;
    default:
    player.Stop();
    Console.WriteLine("Stopped");
    break;
    }
    }
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, adds it to the Master mixer, and starts playback. It then enters a loop that handles user input for playback control:

  • P: Pauses or resumes playback.
  • S: Prompts for a seek time (in seconds) and seeks using TimeSpan.FromSeconds(). Seek returns a boolean indicating whether the seek succeeded.
  • +/-: Adjusts PlaybackSpeed.
  • V/M: Adjusts player.Volume.
  • Any other key: Stops playback.

4. Looping with Custom Loop Points

This tutorial demonstrates how to enable looping for a SoundPlayer and how to set custom loop points.

Steps:

  1. Create a new console application:

    dotnet new console -o LoopingPlayback
    cd LoopingPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System;
    using System.IO;
    namespace LoopingPlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    using var dataProvider = new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav"));
    var player = new SoundPlayer(dataProvider);
    // Enable looping.
    player.IsLooping = true;
    // **Optional: Set custom loop points**
    // Example 1: Loop from 2.5 seconds to 7.0 seconds (using float seconds)
    // player.SetLoopPoints(2.5f, 7.0f);
    // Example 2: Loop from sample 110250 to sample 308700 (using samples)
    // player.SetLoopPoints(110250, 308700); // Assuming 44.1kHz stereo, these are example values
    // Example 3: Loop from 1.5 seconds to the natural end of the audio (using TimeSpan, end point is optional)
    player.SetLoopPoints(TimeSpan.FromSeconds(1.5));
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until the user presses a key.
    Console.WriteLine("Playing audio in a loop... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file. Make sure the sample indices in Example 2 are valid for your file if you use that option.

  5. Build and run the application:

    dotnet run

Explanation:

This code builds upon the basic playback example and introduces audio looping.

  • player.IsLooping = true;: Enables looping.
  • player.SetLoopPoints(...): Configures the loop region.
    • Overloads accept float seconds, int samples, or TimeSpan.
    • If endTime (or endSample) is omitted or set to -1f (or -1), the loop runs to the natural end of the audio. (See the sketch below for converting seconds to sample positions.)
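
The sample-based overload expects sample positions. A minimal sketch of deriving them from times in seconds, assuming the indices are per-channel sample (frame) positions at the file's own sample rate; this matches the commented Example 2, where 110250 and 308700 correspond to the 2.5 s and 7.0 s points of Example 1 at 44.1 kHz:

    // Hypothetical conversion: loop points in seconds -> sample positions.
    // Assumes sample positions are frame counts at the source file's rate (44.1 kHz here).
    const int fileSampleRate = 44100;
    int loopStartSample = (int)(2.5f * fileSampleRate); // 110250
    int loopEndSample = (int)(7.0f * fileSampleRate);   // 308700
    player.SetLoopPoints(loopStartSample, loopEndSample);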

5. Surround Sound

This tutorial demonstrates how to use SurroundPlayer to play audio with surround sound configurations.

Steps:

  1. Create a new console application:

    dotnet new console -o SurroundPlayback
    cd SurroundPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System.Numerics;
    using System.IO;
    namespace SurroundPlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine with appropriate channels for surround.
    // For 7.1, use 8 channels.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback, channels: 8);
    // Create a SurroundPlayer. Load a mono or stereo file for surround upmixing,
    // or a multi-channel file if your source is already surround.
    // The SurroundPlayer will attempt to pan mono/stereo to the configured speakers.
    using var dataProvider = new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")); // Can be mono/stereo
    var player = new SurroundPlayer(dataProvider);
    // Configure the SurroundPlayer for 7.1 surround sound.
    player.SpeakerConfig = SurroundPlayer.SpeakerConfiguration.Surround71;
    // Set the panning method (VBAP is often good for surround).
    player.Panning = SurroundPlayer.PanningMethod.Vbap;
    // Set the listener position (optional, (0,0) is center).
    player.ListenerPosition = new Vector2(0, 0);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing surround sound audio... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file (mono, stereo, or multi-channel).

  5. Build and run: dotnet run

Explanation:

This code initializes the AudioEngine with 8 channels for 7.1 output and creates a SurroundPlayer. If the input audiofile.wav is mono or stereo, the SurroundPlayer pans it across the configured 7.1 speaker layout; the SpeakerConfig and Panning method control how. The ListenerPosition can also be adjusted, as sketched below.
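
For example, moving the virtual listener changes how the source is distributed across the speakers. A small sketch using the ListenerPosition property shown above; the exact coordinate scale is an assumption, so adjust to taste:

    // Shift the listener slightly toward the rear-left of the virtual room.
    // (0,0) is the center; the usable coordinate range is an assumption here.
    player.ListenerPosition = new Vector2(-0.5f, -0.5f);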

6. Efficient Playback with ChunkedDataProvider

This tutorial demonstrates how to use the ChunkedDataProvider for efficient playback of large audio files.

Steps:

  1. Create a new console application:

    dotnet new console -o ChunkedPlayback
    cd ChunkedPlayback
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System.IO;
    namespace ChunkedPlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Playback);
    // Create a ChunkedDataProvider and load a large audio file.
    // Replace "path/to/your/large/audiofile.wav" with the actual path.
    using var dataProvider = new ChunkedDataProvider("path/to/your/large/audiofile.wav");
    var player = new SoundPlayer(dataProvider);
    Mixer.Master.AddComponent(player);
    player.Play();
    Console.WriteLine("Playing audio with ChunkedDataProvider... Press any key to stop.");
    Console.ReadKey();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/large/audiofile.wav" with the path to a large audio file.

  5. Build and run: dotnet run

Explanation: The ChunkedDataProvider reads and decodes audio in chunks, suitable for large files. It’s IDisposable and managed with using.
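Playback control works the same as with other providers. For instance, assuming the provider supports seeking, you can jump deep into a long file using the Seek overload shown earlier (a sketch, not a guarantee about buffering behavior):

    // Jump 10 minutes into the file; Seek returns false if the position is invalid.
    if (!player.Seek(TimeSpan.FromSeconds(600)))
        Console.WriteLine("Seek failed (position may be out of range).");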

Recording

1. Basic Recording

This tutorial demonstrates how to record audio from the default recording device and save it to a WAV file.

Steps:

  1. Create a new console application:

    dotnet new console -o BasicRecording
    cd BasicRecording
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using System;
    using System.IO;
    namespace BasicRecording;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize for recording, e.g., 48kHz.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Record);
    string outputFilePath = Path.Combine(Directory.GetCurrentDirectory(), "output.wav");
    using var fileStream = new FileStream(outputFilePath, FileMode.Create, FileAccess.Write, FileShare.None);
    using var recorder = new Recorder(fileStream, sampleRate: 48000, encodingFormat: EncodingFormat.Wav);
    Console.WriteLine("Recording... Press any key to stop.");
    recorder.StartRecording();
    Console.ReadKey();
    recorder.StopRecording();
    Console.WriteLine($"Recording stopped. Saved to {outputFilePath}");
    }
    }
  4. Build and run: dotnet run

Explanation: Initializes AudioEngine for recording, creates a Recorder to save to “output.wav”, starts recording, waits for a key, then stops.
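If you prefer to record for a fixed duration instead of waiting for a key press, a minimal variation on the same Recorder usage (Thread.Sleep is just a simple way to block the main thread for the duration):

    // Record for 10 seconds, then stop.
    recorder.StartRecording();
    Console.WriteLine("Recording for 10 seconds...");
    Thread.Sleep(TimeSpan.FromSeconds(10));
    recorder.StopRecording();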

2. Recording with Custom Processing

This tutorial demonstrates using a callback to process recorded audio in real-time.

Steps:

  1. Create a new console application and install SoundFlow.

  2. Replace Program.cs:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using System;
    using System.IO;
    namespace CustomProcessing;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    using var audioEngine = new MiniAudioEngine(48000, Capability.Record);
    using var recorder = new Recorder(ProcessAudio, sampleRate: 48000);
    Console.WriteLine("Recording with custom processing... Press any key to stop.");
    recorder.StartRecording();
    Console.ReadKey();
    recorder.StopRecording();
    Console.WriteLine("Recording stopped.");
    }
    // This method will be called for each chunk of recorded audio.
    private static void ProcessAudio(Span<float> samples)
    {
    // Perform custom processing on the audio samples.
    // For example, calculate the average level:
    float sum = 0;
    for (int i = 0; i < samples.Length; i++)
    {
    sum += Math.Abs(samples[i]);
    }
    float averageLevel = sum / samples.Length;
    Console.WriteLine($"Average level: {averageLevel:F4}");
    }
    }
  3. Build and run: dotnet run

Explanation: A Recorder is created with a ProcessAudio callback that gets called with chunks of recorded audio.
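The callback receives raw samples, so any per-chunk analysis fits there. A hedged sketch of an alternative callback (the name MonitorPeaks is illustrative) that tracks the peak level and flags clipping; you would pass it to the Recorder constructor in place of ProcessAudio:

    // Alternative callback: report the peak sample of each chunk and warn if it clips.
    // Usage: using var recorder = new Recorder(MonitorPeaks, sampleRate: 48000);
    private static void MonitorPeaks(Span<float> samples)
    {
        float peak = 0f;
        foreach (var sample in samples)
            peak = Math.Max(peak, Math.Abs(sample));

        if (peak >= 1.0f)
            Console.WriteLine($"Clipping detected! Peak: {peak:F3}");
        else
            Console.WriteLine($"Peak level: {peak:F3}");
    }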

3. Microphone Playback (Loopback/Monitor)

This tutorial demonstrates capturing microphone audio and playing it back in real-time.

Steps:

  1. Create a new console application and install SoundFlow.

  2. Replace Program.cs:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using System;
    namespace MicrophonePlayback;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Mixed capability for simultaneous record & playback.
    using var audioEngine = new MiniAudioEngine(48000, Capability.Mixed);
    using var microphoneDataProvider = new MicrophoneDataProvider();
    // Create a SoundPlayer and connect the MicrophoneDataProvider.
    var player = new SoundPlayer(microphoneDataProvider);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start capturing audio from the microphone.
    microphoneDataProvider.StartCapture();
    // Start playback.
    player.Play();
    Console.WriteLine("Playing live microphone audio... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and capture.
    player.Stop();
    microphoneDataProvider.StopCapture(); // Stop capture before provider is disposed by 'using'
    Mixer.Master.RemoveComponent(player);
    }
    }
  3. Build and run: dotnet run

Explanation: Uses MicrophoneDataProvider as a source for a SoundPlayer to achieve real-time microphone monitoring. AudioEngine needs Capability.Mixed.
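When monitoring through speakers rather than headphones, feedback can build up quickly. One simple mitigation is to monitor at reduced volume via the player's Volume property, e.g. when constructing the player (0.5 is just an illustrative starting point):

    // Monitor the microphone at half volume to reduce the risk of feedback.
    var player = new SoundPlayer(microphoneDataProvider) { Volume = 0.5f };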

Effects

1. Reverb

This tutorial demonstrates how to apply a reverb effect to an audio stream using the AlgorithmicReverbModifier.

Steps:

  1. Create a new console application:

    dotnet new console -o ReverbEffect
    cd ReverbEffect
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;
    namespace ReverbEffect;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create an AlgorithmicReverbModifier.
    var reverb = new AlgorithmicReverbModifier
    {
    RoomSize = 0.8f,
    Damp = 0.5f,
    Wet = 0.3f,
    Dry = 0.7f,
    Width = 1f
    };
    // Add the reverb modifier to the player.
    player.AddModifier(reverb);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio with reverb... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates an AlgorithmicReverbModifier with custom settings, adds the modifier to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the reverb effect applied. Experiment with different values for RoomSize, Damp, Wet, Dry, and Width to change the characteristics of the reverb.
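As a starting point for that experimentation, two contrasting settings using the same properties (the values are only suggestions, not calibrated presets):

    // A small, fairly dry room.
    var smallRoom = new AlgorithmicReverbModifier
    {
        RoomSize = 0.3f,
        Damp = 0.7f,
        Wet = 0.15f,
        Dry = 0.85f,
        Width = 0.8f
    };

    // A large, washy hall.
    var largeHall = new AlgorithmicReverbModifier
    {
        RoomSize = 0.95f,
        Damp = 0.3f,
        Wet = 0.45f,
        Dry = 0.55f,
        Width = 1f
    };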

2. Equalization

This tutorial demonstrates how to use the ParametricEqualizer to adjust the frequency balance of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o Equalization
    cd Equalization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;
    namespace Equalization;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a ParametricEqualizer.
    var equalizer = new ParametricEqualizer(AudioEngine.Channels);
    // Add some equalizer bands:
    // Boost low frequencies (bass)
    equalizer.AddBand(FilterType.LowShelf, 100, 6, 0.7f, 0);
    // Cut mid frequencies
    equalizer.AddBand(FilterType.Peaking, 1000, -4, 2, 0);
    // Boost high frequencies (treble)
    equalizer.AddBand(FilterType.HighShelf, 10000, 5, 0.7f, 0);
    // Add the equalizer to the player.
    player.AddModifier(equalizer);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio with equalization... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a ParametricEqualizer, adds three equalizer bands (low-shelf boost, peaking cut, high-shelf boost), adds the equalizer to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the equalization applied.

Experiment with different filter types (FilterType), frequencies, gain values, and Q values to shape the sound to your liking.
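For instance, a rough "telephone voice" effect cuts the lows and highs and lifts the midrange. A small sketch that assumes the same AddBand parameter order as the example above (filter type, frequency in Hz, gain in dB, Q, channel); the values are illustrative:

    // Roll off the low end, emphasize the midrange, and roll off the top.
    equalizer.AddBand(FilterType.LowShelf, 300, -12, 0.7f, 0);
    equalizer.AddBand(FilterType.Peaking, 1700, 6, 1.5f, 0);
    equalizer.AddBand(FilterType.HighShelf, 3400, -12, 0.7f, 0);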

3. Chorus and Delay

This tutorial demonstrates how to apply chorus and delay effects using the ChorusModifier and DelayModifier.

Steps:

  1. Create a new console application:

    dotnet new console -o ChorusDelay
    cd ChorusDelay
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;
    namespace ChorusDelay;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a ChorusModifier.
    var chorus = new ChorusModifier(
    depth: 25f, // Depth (in milliseconds)
    rate: 0.8f, // Rate (in Hz)
    feedback: 0.5f, // Feedback amount
    wetDryMix: 0.5f, // Wet/dry mix (0 = dry, 1 = wet)
    maxDelayLength: 500 // Maximum delay length (in milliseconds)
    );
    // Create a DelayModifier.
    var delay = new DelayModifier(
    delayLength: 500, // Delay length (in milliseconds)
    feedback: 0.6f, // Feedback amount
    wetMix: 0.4f, // Wet/dry mix
    cutoffFrequency: 4000 // Cutoff frequency for the low-pass filter
    );
    // Add the chorus and delay modifiers to the player.
    player.AddModifier(chorus);
    player.AddModifier(delay);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio with chorus and delay... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates ChorusModifier and DelayModifier instances with custom settings, adds both modifiers to the player (they will be applied in the order they are added), adds the player to the Master mixer, and starts playback. You will hear the audio with both chorus and delay effects applied.

Experiment with different parameter values for the chorus and delay modifiers to create a wide range of sonic textures.

4. Compression

This tutorial demonstrates how to use the CompressorModifier to reduce the dynamic range of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o Compression
    cd Compression
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;
    namespace Compression;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a CompressorModifier.
    var compressor = new CompressorModifier(
    threshold: -20f, // Threshold (in dB)
    ratio: 4f, // Compression ratio
    attack: 10f, // Attack time (in milliseconds)
    release: 100f, // Release time (in milliseconds)
    knee: 5f, // Knee width (in dB)
    makeupGain: 6f // Makeup gain (in dB)
    );
    // Add the compressor to the player.
    player.AddModifier(compressor);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio with compression... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a CompressorModifier with specific settings, adds the compressor to the player, adds the player to the Master mixer, and starts playback. You will hear the audio with the compression effect applied, resulting in a more consistent volume level.

Experiment with different values for threshold, ratio, attack, release, knee, and makeupGain to understand how they affect the compression.
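To build intuition for threshold and ratio: with the settings above, a peak 8 dB over the -20 dB threshold comes out only 2 dB over it (8 dB divided by the 4:1 ratio), before makeup gain. A tiny sketch of that static gain calculation, ignoring the knee, attack, and release for simplicity (the helper name is illustrative):

    // Static compressor curve: level above threshold is divided by the ratio.
    static float CompressedLevelDb(float inputDb, float thresholdDb, float ratio) =>
        inputDb <= thresholdDb
            ? inputDb
            : thresholdDb + (inputDb - thresholdDb) / ratio;

    // Example: -12 dB input, -20 dB threshold, 4:1 ratio -> -18 dB before makeup gain.
    Console.WriteLine(CompressedLevelDb(-12f, -20f, 4f)); // -18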

5. Noise Reduction

This tutorial demonstrates how to use the NoiseReductionModifier to reduce noise in an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o NoiseReduction
    cd NoiseReduction
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Modifiers;
    using SoundFlow.Providers;
    namespace NoiseReduction;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load a noisy audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/noisy/audiofile.wav")));
    // Create a NoiseReductionModifier.
    var noiseReducer = new NoiseReductionModifier(
    fftSize: 2048, // FFT size (power of 2)
    alpha: 3f, // Smoothing factor for noise estimation
    beta: 0.001f, // Minimum gain for noise reduction
    smoothingFactor: 0.8f, // Smoothing factor for gain
    gain: 1.0f, // Post-processing gain
    noiseFrames: 10 // Number of initial frames to use for noise estimation
    );
    // Add the noise reducer to the player.
    player.AddModifier(noiseReducer);
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio with noise reduction... Press any key to stop.");
    Console.ReadKey();
    // Stop playback.
    player.Stop();
    // Remove the player from the mixer.
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/noisy/audiofile.wav" with the actual path to a noisy audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads a noisy audio file, creates a NoiseReductionModifier with specific settings, adds the noise reducer to the player, adds the player to the Master mixer, and starts playback. You should hear a reduction in the noise level of the audio.

Experiment with different values for fftSize, alpha, beta, smoothingFactor, gain, and noiseFrames to fine-tune the noise reduction.

6. Mixing

This tutorial demonstrates how to use the Mixer to combine multiple audio sources.

Steps:

  1. Create a new console application:

    dotnet new console -o Mixing
    cd Mixing
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    namespace Mixing;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create two SoundPlayer instances and load different audio files.
    var player1 = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile1.wav")));
    var player2 = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile2.wav")));
    // Create an Oscillator that generates a sine wave.
    var oscillator = new Oscillator
    {
    Frequency = 440, // 440 Hz (A4 note)
    Amplitude = 0.5f,
    Type = Oscillator.WaveformType.Sine
    };
    // Add the players and the oscillator to the master mixer.
    Mixer.Master.AddComponent(player1);
    Mixer.Master.AddComponent(player2);
    Mixer.Master.AddComponent(oscillator);
    // Start playback for both players.
    player1.Play();
    player2.Play();
    // Keep the console application running until the user presses a key.
    Console.WriteLine("Playing mixed audio... Press any key to stop.");
    Console.ReadKey();
    // Stop playback for both players.
    player1.Stop();
    player2.Stop();
    // Remove the components from the mixer.
    Mixer.Master.RemoveComponent(player1);
    Mixer.Master.RemoveComponent(player2);
    Mixer.Master.RemoveComponent(oscillator);
    }
    }
  4. Replace "path/to/your/audiofile1.wav" and "path/to/your/audiofile2.wav" with the actual paths to two different audio files.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates two SoundPlayer instances, loads two different audio files, creates an Oscillator that generates a sine wave, adds all three components to the Master mixer, and starts playback for the players. You will hear the two audio files and the sine wave mixed together.

Experiment with adding more sound sources to the mixer and adjusting their individual volumes and panning using the Volume and Pan properties of each SoundComponent.
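For example, to balance the mix you might lower the oscillator and spread the two players across the stereo field. A small sketch; Pan is assumed here to range from -1 (full left) to 1 (full right):

    // Balance the mix: quieter oscillator, players panned apart.
    oscillator.Volume = 0.2f;
    player1.Pan = -0.5f; // toward the left (assumed -1..1 range)
    player2.Pan = 0.5f;  // toward the right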

Audio Device Management

This tutorial demonstrates how to list available audio devices and switch the playback device.

Steps:

  1. Create a new console application and install SoundFlow.

  2. Replace Program.cs:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Structs; // For DeviceInfo
    using System;
    using System.IO;
    using System.Linq;
    namespace DeviceSwitcher;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize engine (MiniAudioEngine used here)
    using var engine = new MiniAudioEngine(48000, Capability.Playback);
    Console.WriteLine("Available Playback Devices:");
    engine.UpdateDevicesInfo();
    for (int i = 0; i < engine.PlaybackDeviceCount; i++)
    {
    Console.WriteLine($"{i}: {engine.PlaybackDevices[i].Name} {(engine.PlaybackDevices[i].IsDefault ? "(Default)" : "")}");
    }
    Console.WriteLine($"Current Playback Device: {engine.CurrentPlaybackDevice?.Name ?? "None selected"}");
    // Simple audio playback setup
    Console.Write("Enter path to an audio file to play: ");
    string? filePath = Console.ReadLine()?.Trim('"');
    if (string.IsNullOrEmpty(filePath) || !File.Exists(filePath))
    {
    Console.WriteLine("Invalid file path. Exiting.");
    return;
    }
    using var dataProvider = new StreamDataProvider(File.OpenRead(filePath));
    var player = new SoundPlayer(dataProvider);
    Mixer.Master.AddComponent(player);
    player.Play();
    Console.WriteLine($"Playing on {engine.CurrentPlaybackDevice?.Name ?? "default device"}.");
    while (true)
    {
    Console.Write("Enter device number to switch to, 'r' to refresh list, or 'q' to quit: ");
    string? input = Console.ReadLine();
    if (input?.ToLower() == "q") break;
    if (input?.ToLower() == "r")
    {
    engine.UpdateDevicesInfo();
    Console.WriteLine("\nAvailable Playback Devices (Refreshed):");
    for (int i = 0; i < engine.PlaybackDeviceCount; i++)
    {
    Console.WriteLine($"{i}: {engine.PlaybackDevices[i].Name} {(engine.PlaybackDevices[i].IsDefault ? "(Default)" : "")}");
    }
    Console.WriteLine($"Current Playback Device: {engine.CurrentPlaybackDevice?.Name ?? "None selected"}");
    continue;
    }
    if (int.TryParse(input, out int deviceIndex))
    {
    if (deviceIndex >= 0 && deviceIndex < engine.PlaybackDeviceCount)
    {
    try
    {
    Console.WriteLine($"Switching to {engine.PlaybackDevices[deviceIndex].Name}...");
    engine.SwitchDevice(engine.PlaybackDevices[deviceIndex], DeviceType.Playback);
    Console.WriteLine($"Successfully switched to {engine.CurrentPlaybackDevice?.Name}.");
    }
    catch (Exception ex)
    {
    Console.WriteLine($"Error switching device: {ex.Message}");
    }
    }
    else
    {
    Console.WriteLine("Invalid device number.");
    }
    }
    else
    {
    Console.WriteLine("Invalid input.");
    }
    }
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    }
    }
  3. Build and run. You’ll see a list of playback devices. Enter the number of the device you want to switch to while audio is playing.

Explanation: The AudioEngine (here MiniAudioEngine) provides UpdateDevicesInfo() to get device lists (PlaybackDevices, CaptureDevices). SwitchDevice() or SwitchDevices() can then be used to change the active audio output/input.
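For example, to fall back to the system default output programmatically (rather than prompting the user), you can scan the refreshed device list for the entry flagged IsDefault, using the same members as the tutorial above:

    // Switch back to the system default playback device, if one is reported.
    engine.UpdateDevicesInfo();
    for (int i = 0; i < engine.PlaybackDeviceCount; i++)
    {
        if (engine.PlaybackDevices[i].IsDefault)
        {
            engine.SwitchDevice(engine.PlaybackDevices[i], DeviceType.Playback);
            break;
        }
    }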

Advanced Voice Processing with WebRTC APM

This tutorial shows how to use the WebRtcApmModifier for real-time noise suppression and echo cancellation on microphone input. For offline file processing, refer to the WebRTC APM Extension documentation.

Prerequisites:

  • SoundFlow core package.
  • SoundFlow.Extensions.WebRtc.Apm NuGet package.

Steps:

  1. Create a console app and install both SoundFlow packages.

  2. Replace Program.cs:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Extensions.WebRtc.Apm; // For enums like NoiseSuppressionLevel
    using SoundFlow.Extensions.WebRtc.Apm.Modifiers; // For WebRtcApmModifier
    using System;
    namespace WebRtcApmDemo;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize AudioEngine. 48kHz is good for WebRTC APM.
    // Capability.Mixed is needed for AEC (to get playback audio for far-end).
    using var audioEngine = new MiniAudioEngine(48000, Capability.Mixed, channels: 1); // Mono for voice
    // Setup microphone input
    using var micProvider = new MicrophoneDataProvider();
    var micPlayer = new SoundPlayer(micProvider) { Name = "MicrophoneInput" };
    // Instantiate and configure WebRtcApmModifier
    var apmModifier = new WebRtcApmModifier(
    aecEnabled: true, // Enable Acoustic Echo Cancellation
    aecMobileMode: false,
    aecLatencyMs: 40, // Adjust based on your system's latency
    nsEnabled: true, // Enable Noise Suppression
    nsLevel: NoiseSuppressionLevel.High,
    agc1Enabled: true, // Enable Automatic Gain Control (AGC1)
    agcMode: GainControlMode.AdaptiveDigital,
    agcTargetLevel: -6, // Target level in dBFS
    agcLimiter: true,
    hpfEnabled: true // Enable High Pass Filter
    );
    micPlayer.AddModifier(apmModifier);
    // To test AEC, you might want to play some audio simultaneously
    // For simplicity, this example focuses on mic input processing.
    // If you play audio through Mixer.Master, AEC will use it as far-end.
    Mixer.Master.AddComponent(micPlayer);
    micProvider.StartCapture();
    micPlayer.Play();
    Console.WriteLine("Processing microphone with WebRTC APM (AEC, NS, AGC, HPF)...");
    Console.WriteLine("Speak into your microphone. Press any key to stop.");
    Console.ReadKey();
    micPlayer.Stop();
    micProvider.StopCapture();
    Mixer.Master.RemoveComponent(micPlayer);
    apmModifier.Dispose(); // Important!
    }
    }
  3. Build and run. Speak into your microphone.

Explanation: This setup processes microphone audio in real-time.

  • AudioEngine is set to Capability.Mixed and a compatible sample rate (48kHz).
  • WebRtcApmModifier is configured with AEC, NS, AGC, and HPF enabled.
  • AEC requires a far-end signal. The modifier automatically listens to AudioEngine.OnAudioProcessed for Playback capability audio to use as the far-end reference. If you play music through another SoundPlayer added to Mixer.Master, that becomes the far-end signal (see the sketch below).
  • Remember to Dispose() the WebRtcApmModifier to release native resources.
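
To give the echo canceller something to cancel, you can play a far-end signal through the same mixer while the microphone is being processed. A hedged sketch reusing the playback APIs shown earlier; the file path is a placeholder:

    // Play a far-end signal (e.g., music) so AEC has a reference to cancel.
    using var farEndProvider = new StreamDataProvider(File.OpenRead("path/to/farend.wav"));
    var farEndPlayer = new SoundPlayer(farEndProvider) { Name = "FarEnd" };
    Mixer.Master.AddComponent(farEndPlayer);
    farEndPlayer.Play();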

Analysis

1. Level Metering

This tutorial demonstrates how to use the LevelMeterAnalyzer to measure the RMS (root mean square) and peak levels of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o LevelMetering
    cd LevelMetering
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    namespace LevelMetering;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a LevelMeterAnalyzer.
    var levelMeter = new LevelMeterAnalyzer();
    // Connect the player's output to the level meter's input.
    player.AddAnalyzer(levelMeter);
    // Add the player to the master mixer (the level meter doesn't produce output).
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Create a timer to periodically display the RMS and peak levels.
    var timer = new System.Timers.Timer(100); // Update every 100 milliseconds
    timer.Elapsed += (sender, e) =>
    {
    Console.WriteLine($"RMS Level: {levelMeter.Rms:F4}, Peak Level: {levelMeter.Peak:F4}");
    };
    timer.Start();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying level meter... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    timer.Stop();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a LevelMeterAnalyzer, connects the player’s output to the analyzer’s input, adds the player to the Master mixer, and starts playback. It then creates a timer that fires every 100 milliseconds, printing the current RMS and peak levels to the console.
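The Rms and Peak values are linear amplitudes. If you prefer the decibel scale most meters use, a small conversion sketch, assuming full scale corresponds to 1.0 (the helper name is illustrative):

    // Convert a linear level (0..1 full scale) to dBFS; clamp to avoid log(0).
    static float ToDbfs(float level) =>
        20f * MathF.Log10(Math.Max(level, 1e-9f));

    // e.g. inside the timer callback:
    // Console.WriteLine($"RMS: {ToDbfs(levelMeter.Rms):F1} dBFS, Peak: {ToDbfs(levelMeter.Peak):F1} dBFS");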

2. Spectrum Analysis

This tutorial demonstrates how to use the SpectrumAnalyzer to analyze the frequency content of an audio stream using the Fast Fourier Transform (FFT).

Steps:

  1. Create a new console application:

    dotnet new console -o SpectrumAnalysis
    cd SpectrumAnalysis
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    namespace SpectrumAnalysis;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a SpectrumAnalyzer with an FFT size of 2048.
    var spectrumAnalyzer = new SpectrumAnalyzer(fftSize: 2048);
    // Connect the player's output to the spectrum analyzer's input.
    player.AddAnalyzer(spectrumAnalyzer);
    // Add the player to the master mixer (the spectrum analyzer doesn't produce output).
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Create a timer to periodically display the spectrum data.
    var timer = new System.Timers.Timer(100); // Update every 100 milliseconds
    timer.Elapsed += (sender, e) =>
    {
    // Get the spectrum data from the analyzer.
    var spectrumData = spectrumAnalyzer.SpectrumData;
    // Print the magnitude of the first few frequency bins.
    if (spectrumData.Length > 0)
    {
    Console.Write("Spectrum: ");
    for (int i = 0; i < Math.Min(10, spectrumData.Length); i++)
    {
    Console.Write($"{spectrumData[i]:F2} ");
    }
    Console.WriteLine();
    }
    };
    timer.Start();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying spectrum data... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    timer.Stop();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a SpectrumAnalyzer with an FFT size of 2048, connects the player’s output to the analyzer’s input, adds the player to the Master mixer, and starts playback. It then creates a timer that fires every 100 milliseconds, printing the magnitude of the first 10 frequency bins of the spectrum data to the console.
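Each element of SpectrumData corresponds to a frequency bin, and the bin's center frequency follows from the engine sample rate and the FFT size. A small sketch of that mapping, using the 44.1 kHz rate and fftSize of 2048 configured above:

    // Frequency of bin i = i * sampleRate / fftSize.
    const int sampleRate = 44100;
    const int fftSize = 2048;
    for (int i = 0; i < 10; i++)
    {
        float frequencyHz = i * (float)sampleRate / fftSize; // ~21.5 Hz per bin
        Console.WriteLine($"Bin {i}: {frequencyHz:F1} Hz");
    }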

3. Voice Activity Detection

This tutorial demonstrates how to use the VoiceActivityDetector to detect the presence of human voice in an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o VoiceActivityDetection
    cd VoiceActivityDetection
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    namespace VoiceActivityDetection;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine (either for playback or recording).
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback); // Or Capability.Record for microphone input
    // Create a SoundPlayer and load an audio file (if using playback).
    // If using recording, you don't need a SoundPlayer.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a VoiceActivityDetector.
    var vad = new VoiceActivityDetector();
    // Connect the VAD as an analyzer to the player's output (or microphone input).
    player.AddAnalyzer(vad);
    // Subscribe to the SpeechDetected event.
    vad.SpeechDetected += isDetected => Console.WriteLine($"Speech detected: {isDetected}");
    // Add the player to the master mixer (if using playback).
    Mixer.Master.AddComponent(player);
    // Start playback (if using playback).
    player.Play();
    // If using recording, you would typically start a Recorder here instead of a SoundPlayer.
    // Keep the console application running until the user presses a key.
    Console.WriteLine("Analyzing audio for voice activity... Press any key to stop.");
    Console.ReadKey();
    // Stop playback (if using playback) and clean up.
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file (if using playback).

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine (either for playback or recording), creates a SoundPlayer and loads an audio file (if using playback), creates a VoiceActivityDetector, connects the player’s output (or microphone input) to the VAD, subscribes to the SpeechDetected event to print messages to the console when speech is detected or not detected, adds the player to the Master mixer (if using playback), and starts playback.

If you want to analyze live microphone input instead of a file, you can reuse the MicrophoneDataProvider approach from the recording tutorials rather than a file-backed SoundPlayer: feed the microphone into a SoundPlayer and attach the VoiceActivityDetector to that player.
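
A minimal sketch of that variant, assuming the engine is initialized with Capability.Mixed as in the microphone playback tutorial:

    // Analyze live microphone audio for voice activity.
    using var micProvider = new MicrophoneDataProvider();
    var micPlayer = new SoundPlayer(micProvider);

    var vad = new VoiceActivityDetector();
    micPlayer.AddAnalyzer(vad);
    vad.SpeechDetected += isDetected => Console.WriteLine($"Speech detected: {isDetected}");

    Mixer.Master.AddComponent(micPlayer);
    micProvider.StartCapture();
    micPlayer.Play();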

Visualization

1. Level Meter

This tutorial demonstrates how to create a simple console-based level meter using the LevelMeterAnalyzer and LevelMeterVisualizer.

Steps:

  1. Create a new console application:

    dotnet new console -o LevelMeterVisualization
    cd LevelMeterVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    using System.Diagnostics;
    namespace LevelMeterVisualization;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a LevelMeterAnalyzer.
    var levelMeterAnalyzer = new LevelMeterAnalyzer();
    // Create a LevelMeterVisualizer.
    var levelMeterVisualizer = new LevelMeterVisualizer(levelMeterAnalyzer);
    // Connect the player's output to the level meter analyzer's input.
    player.AddAnalyzer(levelMeterAnalyzer);
    // Add the player to the master mixer (the level meter analyzer doesn't produce output).
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Subscribe to the VisualizationUpdated event to trigger a redraw.
    levelMeterVisualizer.VisualizationUpdated += (sender, e) =>
    {
    DrawLevelMeter(levelMeterAnalyzer.Rms, levelMeterAnalyzer.Peak);
    };
    // Start a timer to update the visualization.
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    var timer = new System.Timers.Timer(1000 / 60); // Update at approximately 60 FPS
    timer.Elapsed += (sender, e) =>
    {
    levelMeterVisualizer.ProcessOnAudioData(Array.Empty<float>());
    levelMeterVisualizer.Render(new ConsoleVisualizationContext());
    };
    timer.Start();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying level meter... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    timer.Stop();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    levelMeterVisualizer.Dispose();
    }
    // Helper method to draw a simple console-based level meter.
    private static void DrawLevelMeter(float rms, float peak)
    {
    int barLength = (int)(rms * 40); // Scale the RMS value to a bar length
    int peakBarLength = (int)(peak * 40); // Scale the peak value to a bar length
    Console.SetCursorPosition(0, 0);
    Console.Write("RMS: ");
    Console.Write(new string('#', barLength));
    Console.Write(new string(' ', 40 - barLength));
    Console.Write("|\n");
    Console.SetCursorPosition(0, 1);
    Console.Write("Peak: ");
    Console.Write(new string('#', peakBarLength));
    Console.Write(new string(' ', 40 - peakBarLength));
    Console.Write("|");
    Console.SetCursorPosition(0, 3);
    }
    }
    // Simple IVisualizationContext implementation for console output.
    public class ConsoleVisualizationContext : IVisualizationContext
    {
    public void Clear()
    {
    // No need to clear the console in this example.
    }
    public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1)
    {
    // Simple line drawing not implemented for this example.
    }
    public void DrawRectangle(float x, float y, float width, float height, Color color)
    {
    // Simple rectangle drawing not implemented for this example.
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a LevelMeterAnalyzer and a LevelMeterVisualizer, connects the player’s output to the analyzer, adds the player to the Master mixer, and starts playback. It then subscribes to the VisualizationUpdated event of the visualizer to redraw the level meter when the data changes. Finally, it starts a timer that calls ProcessOnAudioData and Render on the visualizer approximately 60 times per second. The DrawLevelMeter method is a helper function that draws a simple console-based level meter using # characters.

2. Waveform

This tutorial demonstrates how to use the WaveformVisualizer to display the waveform of an audio stream.

Steps:

  1. Create a new console application:

    dotnet new console -o WaveformVisualization
    cd WaveformVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    using System.Diagnostics;
    namespace WaveformVisualization;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    var player = new SoundPlayer(new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav")));
    // Create a WaveformVisualizer.
    var waveformVisualizer = new WaveformVisualizer();
    // Add the player to the master mixer.
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Subscribe to the VisualizationUpdated event to trigger a redraw.
    waveformVisualizer.VisualizationUpdated += (sender, e) =>
    {
    DrawWaveform(waveformVisualizer.Waveform);
    };
    AudioEngine.OnAudioProcessed += waveformVisualizer.ProcessOnAudioData;
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying waveform... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    waveformVisualizer.Dispose();
    }
    // Helper method to draw a simple console-based waveform.
    private static void DrawWaveform(List<float> waveform)
    {
    Console.Clear();
    int consoleWidth = Console.WindowWidth;
    int consoleHeight = Console.WindowHeight;
    if (waveform.Count == 0)
    {
    return;
    }
    for (int i = 0; i < consoleWidth; i++)
    {
    // Calculate the index into the waveform data, mapping the console width to the waveform length.
    int waveformIndex = (int)(i * (waveform.Count / (float)consoleWidth));
    waveformIndex = Math.Clamp(waveformIndex, 0, waveform.Count - 1);
    // Normalize the waveform value to the console height.
    float sampleValue = waveform[waveformIndex];
    int consoleY = (int)((sampleValue + 1) * 0.5 * consoleHeight); // Map [-1, 1] to [0, consoleHeight]
    consoleY = Math.Clamp(consoleY, 0, consoleHeight - 1);
    // Draw a character at the calculated position.
    Console.SetCursorPosition(i, consoleHeight - consoleY - 1);
    Console.Write("*");
    }
    Console.SetCursorPosition(0, consoleHeight - 1);
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a WaveformVisualizer, adds the player to the Master mixer, and starts playback. It subscribes to the VisualizationUpdated event of the visualizer to redraw the waveform whenever the data changes. The DrawWaveform method is a helper function that draws a simple console-based waveform using * characters. The AudioEngine.OnAudioProcessed event is used to feed chunks of processed audio data to the WaveformVisualizer; the handler is unsubscribed again during cleanup.
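One limitation of picking a single sample per column is that short transients falling between the sampled points are skipped. If you want a denser picture, you can reduce each column to its minimum and maximum sample instead. The helper below is a local sketch (ReduceToColumns is not a SoundFlow API) that could be fed the same Waveform list before drawing:

    // Reduce the waveform to one (min, max) pair per console column so that
    // peaks falling between sampled points remain visible.
    static (float Min, float Max)[] ReduceToColumns(IReadOnlyList<float> waveform, int columns)
    {
        var result = new (float Min, float Max)[columns];
        if (waveform.Count == 0 || columns == 0)
            return result;
        for (int col = 0; col < columns; col++)
        {
            int start = (int)(col * (waveform.Count / (double)columns));
            int end = (int)((col + 1) * (waveform.Count / (double)columns));
            end = Math.Min(Math.Max(end, start + 1), waveform.Count);
            float min = float.MaxValue, max = float.MinValue;
            for (int i = start; i < end; i++)
            {
                min = Math.Min(min, waveform[i]);
                max = Math.Max(max, waveform[i]);
            }
            result[col] = (min, max);
        }
        return result;
    }

You could then draw a vertical run of * characters from Min to Max in each column instead of a single point.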

3. Spectrum Analyzer

This tutorial demonstrates how to create a simple console-based spectrum analyzer using the SpectrumAnalyzer and SpectrumVisualizer.

Steps:

  1. Create a new console application:

    dotnet new console -o SpectrumAnalyzerVisualization
    cd SpectrumAnalyzerVisualization
  2. Install the SoundFlow NuGet package:

    dotnet add package SoundFlow
  3. Replace the contents of Program.cs with the following code:

    using SoundFlow.Abstracts;
    using SoundFlow.Backends.MiniAudio;
    using SoundFlow.Components;
    using SoundFlow.Enums;
    using SoundFlow.Providers;
    using SoundFlow.Visualization;
    using System.IO;
    namespace SpectrumAnalyzerVisualization;
    internal static class Program
    {
    private static void Main(string[] args)
    {
    // Initialize the audio engine.
    using var audioEngine = new MiniAudioEngine(44100, Capability.Playback);
    // Create a SoundPlayer and load an audio file.
    using var dataProvider = new StreamDataProvider(File.OpenRead("path/to/your/audiofile.wav"));
    var player = new SoundPlayer(dataProvider);
    // Create a SpectrumAnalyzer with an FFT size of 2048.
    var spectrumAnalyzer = new SpectrumAnalyzer(fftSize: 2048);
    // Create a SpectrumVisualizer.
    var spectrumVisualizer = new SpectrumVisualizer(spectrumAnalyzer);
    // Connect the player's output to the spectrum analyzer's input.
    player.AddAnalyzer(spectrumAnalyzer);
    // Add the player to the master mixer (the spectrum analyzer doesn't produce output).
    Mixer.Master.AddComponent(player);
    // Start playback.
    player.Play();
    // Subscribe to the VisualizationUpdated event to trigger a redraw.
    spectrumVisualizer.VisualizationUpdated += (sender, e) =>
    {
    DrawSpectrum(spectrumAnalyzer.SpectrumData);
    };
    // Start a timer to update the visualization.
    var timer = new System.Timers.Timer(1000 / 60); // Update at approximately 60 FPS
    timer.Elapsed += (sender, e) =>
    {
    spectrumVisualizer.ProcessOnAudioData(Array.Empty<float>());
    spectrumVisualizer.Render(new ConsoleVisualizationContext());
    };
    timer.Start();
    // Keep the console application running until playback finishes or the user presses a key.
    Console.WriteLine("Playing audio and displaying spectrum analyzer... Press any key to stop.");
    Console.ReadKey();
    // Stop playback and clean up.
    timer.Stop();
    player.Stop();
    Mixer.Master.RemoveComponent(player);
    spectrumVisualizer.Dispose();
    }
    // Helper method to draw a simple console-based spectrum analyzer.
    private static void DrawSpectrum(ReadOnlySpan<float> spectrumData)
    {
    Console.Clear();
    int consoleWidth = Console.WindowWidth;
    int consoleHeight = Console.WindowHeight;
    if (spectrumData.IsEmpty)
    {
    return;
    }
    int barWidth = Math.Max(1, consoleWidth / spectrumData.Length); // Ensure at least 1 character per bar
    for (int i = 0; i < spectrumData.Length; i++)
    {
    // Scale the magnitude to the console height.
    float magnitude = spectrumData[i];
    int barHeight = (int)(magnitude * consoleHeight / 2); // Adjust scaling factor as needed
    barHeight = Math.Clamp(barHeight, 0, consoleHeight - 1);
    // Draw a vertical bar for each frequency bin.
    for (int j = 0; j < barHeight; j++)
    {
    for (int w = 0; w < barWidth; w++)
    {
    if (i * barWidth + w < consoleWidth - 1)
    {
    Console.SetCursorPosition(i * barWidth + w, consoleHeight - 1 - j);
    Console.Write("*");
    }
    }
    }
    }
    Console.SetCursorPosition(0, consoleHeight - 1);
    }
    }
    // Simple IVisualizationContext implementation for console output.
    public class ConsoleVisualizationContext : IVisualizationContext
    {
    public void Clear()
    {
    // No need to clear the console in this example.
    }
    public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1f)
    {
    // Simple line drawing not implemented for this example.
    }
    public void DrawRectangle(float x, float y, float width, float height, Color color)
    {
    // Simple rectangle drawing not implemented for this example.
    }
    }
  4. Replace "path/to/your/audiofile.wav" with the actual path to an audio file.

  5. Build and run the application:

    dotnet run

Explanation:

This code initializes the AudioEngine, creates a SoundPlayer, loads an audio file, creates a SpectrumAnalyzer and a SpectrumVisualizer, connects the player’s output to the analyzer, adds the player to the Master mixer, and starts playback. It subscribes to the VisualizationUpdated event of the visualizer to redraw the spectrum when the data changes. The DrawSpectrum method is a helper function that draws a simple console-based spectrum analyzer using * characters. The height of each bar represents the magnitude of the corresponding frequency bin.
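If you want to label the bars, note that (assuming SpectrumData holds the usual linear FFT bins) bin i corresponds to approximately i * sampleRate / fftSize Hz, up to the Nyquist frequency at sampleRate / 2. A small helper, sketched here with the tutorial's values:

    // Convert an FFT bin index to its approximate center frequency in Hz.
    // With the values used above (sampleRate = 44100, fftSize = 2048) each bin
    // spans roughly 21.5 Hz, so bin 46 sits near 1 kHz.
    static double BinToFrequency(int binIndex, int sampleRate, int fftSize)
        => binIndex * (double)sampleRate / fftSize;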

Integrating with UI Frameworks

These examples use basic console output for simplicity. To integrate SoundFlow’s visualizers with a GUI framework (like WPF, WinForms, Avalonia, or MAUI), you’ll need to:

  1. Create an IVisualizationContext implementation: This class will wrap the drawing primitives of your chosen UI framework. For example, in WPF, you might use DrawingContext methods to draw shapes on a Canvas.
  2. Update the UI from the VisualizationUpdated event: In the event handler, trigger a redraw of your UI element that hosts the visualization. Make sure to marshal the update to the UI thread using Dispatcher.Invoke or a similar mechanism if the event is raised from a different thread.
  3. Call the Render method: In your UI’s rendering logic, call the Render method of the visualizer, passing your IVisualizationContext implementation.

Example (Conceptual WPF):

// In your XAML:
// <Canvas x:Name="VisualizationCanvas" />
// In your code-behind (this assumes using directives for System.Windows,
// System.Windows.Controls, System.Windows.Media, System.Windows.Shapes,
// and SoundFlow.Visualization):
public partial class MainWindow : Window
{
private readonly WaveformVisualizer _visualizer;
public MainWindow()
{
InitializeComponent();
// ... Initialize AudioEngine, SoundPlayer, etc. ...
_visualizer = new WaveformVisualizer();
_visualizer.VisualizationUpdated += OnVisualizationUpdated;
// ...
}
private void OnVisualizationUpdated(object? sender, EventArgs e)
{
// Marshal the update to the UI thread
Dispatcher.Invoke(() =>
{
VisualizationCanvas.Children.Clear(); // Clear previous drawing
// Create a custom IVisualizationContext that wraps the Canvas
var context = new WpfVisualizationContext(VisualizationCanvas);
// Render the visualization
_visualizer.Render(context);
});
}
// ...
}
// IVisualizationContext implementation for WPF
public class WpfVisualizationContext : IVisualizationContext
{
private readonly Canvas _canvas;
public WpfVisualizationContext(Canvas canvas)
{
_canvas = canvas;
}
public void Clear()
{
_canvas.Children.Clear();
}
public void DrawLine(float x1, float y1, float x2, float y2, Color color, float thickness = 1f)
{
var line = new Line
{
X1 = x1,
Y1 = y1,
X2 = x2,
Y2 = y2,
Stroke = new SolidColorBrush(System.Windows.Media.Color.FromArgb((byte)(color.A * 255), (byte)(color.R * 255), (byte)(color.G * 255), (byte)(color.B * 255))),
StrokeThickness = thickness
};
_canvas.Children.Add(line);
}
public void DrawRectangle(float x, float y, float width, float height, Color color)
{
var rect = new Rectangle
{
Width = width,
Height = height,
Fill = new SolidColorBrush(System.Windows.Media.Color.FromArgb((byte)(color.A * 255), (byte)(color.R * 255), (byte)(color.G * 255), (byte)(color.B * 255)))
};
Canvas.SetLeft(rect, x);
Canvas.SetTop(rect, y);
_canvas.Children.Add(rect);
}
}

Remember to adapt this conceptual example to your specific UI framework and project structure.

Audio Editing & Persistence

This section covers the non-destructive editing engine introduced in SoundFlow v1.1.0. See the dedicated Editing Engine & Persistence Guide for comprehensive details and examples.

Key features demonstrated in the guide:

  • Creating Compositions, Tracks, and AudioSegments.
  • Manipulating segment properties: SourceStartTime, SourceDuration, TimelineStartTime.
  • Using AudioSegmentSettings: volume, pan, reverse, looping, fades (FadeCurveType).

A simple example of creating a composition and adding a segment:

using SoundFlow.Abstracts;
using SoundFlow.Backends.MiniAudio;
using SoundFlow.Components;
using SoundFlow.Enums;
using SoundFlow.Providers;
using SoundFlow.Editing; // New namespace
using System;
using System.IO;
namespace BasicComposition;
internal static class Program
{
private static void Main(string[] args)
{
// Initialize the audio engine.
using var audioEngine = new MiniAudioEngine(48000, Capability.Playback, channels: 1);
// Create a composition
var composition = new Composition("My First Song") { SampleRate = 48000, TargetChannels = 1 };
// Create a track
var track1 = new Track("Vocals");
composition.AddTrack(track1);
// Load an audio file for a segment
// Ensure Adam.wav is in your output directory or provide a full path
string adamWavPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Adam.wav");
if (!File.Exists(adamWavPath)) {
Console.WriteLine($"Audio file not found: {adamWavPath}");
return;
}
var adamProvider = new StreamDataProvider(File.OpenRead(adamWavPath));
// Create an audio segment
// Play from 0s of source, for 5s duration, place at 1s on timeline
var segment1 = new AudioSegment(adamProvider,
TimeSpan.Zero,
TimeSpan.FromSeconds(5),
TimeSpan.FromSeconds(1),
"Adam_Intro",
ownsDataProvider: true);
// Optionally, modify segment settings
segment1.Settings.Volume = 0.9f;
segment1.Settings.FadeInDuration = TimeSpan.FromMilliseconds(500);
track1.AddSegment(segment1);
// Create a SoundPlayer for the composition
var compositionPlayer = new SoundPlayer(composition); // Composition itself is an ISoundDataProvider
Mixer.Master.AddComponent(compositionPlayer);
compositionPlayer.Play();
Console.WriteLine($"Playing composition '{composition.Name}' for {composition.CalculateTotalDuration().TotalSeconds:F1}s... Press any key to stop.");
Console.ReadKey();
compositionPlayer.Stop();
Mixer.Master.RemoveComponent(compositionPlayer);
composition.Dispose();
}
}

For this example to run, you’d need an Adam.wav file. The new SoundFlow.Samples.EditingMixer project contains sample audio files and more complex editing examples.
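If you don't have a suitable file at hand, the following standalone program writes a short 48 kHz, mono, 16-bit PCM sine wave to Adam.wav using only the .NET base class library (no SoundFlow APIs are involved; the duration and frequency are arbitrary placeholder values):

    using System;
    using System.IO;
    using System.Text;
    namespace MakeTestWav;
    internal static class Program
    {
        private static void Main()
        {
            const int sampleRate = 48000;
            const short channels = 1;
            const short bitsPerSample = 16;
            const double seconds = 3.0;
            const double frequency = 440.0;
            int sampleCount = (int)(sampleRate * seconds);
            int dataSize = sampleCount * channels * (bitsPerSample / 8);
            using var writer = new BinaryWriter(File.Create("Adam.wav"));
            // RIFF header.
            writer.Write(Encoding.ASCII.GetBytes("RIFF"));
            writer.Write(36 + dataSize);
            writer.Write(Encoding.ASCII.GetBytes("WAVE"));
            // "fmt " sub-chunk (PCM).
            writer.Write(Encoding.ASCII.GetBytes("fmt "));
            writer.Write(16);
            writer.Write((short)1); // Audio format: PCM
            writer.Write(channels);
            writer.Write(sampleRate);
            writer.Write(sampleRate * channels * (bitsPerSample / 8)); // Byte rate
            writer.Write((short)(channels * (bitsPerSample / 8)));     // Block align
            writer.Write(bitsPerSample);
            // "data" sub-chunk: a 440 Hz sine wave at half amplitude.
            writer.Write(Encoding.ASCII.GetBytes("data"));
            writer.Write(dataSize);
            for (int i = 0; i < sampleCount; i++)
            {
                double t = i / (double)sampleRate;
                writer.Write((short)(Math.Sin(2 * Math.PI * frequency * t) * short.MaxValue * 0.5));
            }
        }
    }

Run it once from the composition project's output directory and the BasicComposition example above will find Adam.wav next to the executable.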

These tutorials and examples provide a starting point for using SoundFlow in your own audio applications. Explore the different components, modifiers, analyzers, and visualizers to create a wide range of audio processing and visualization solutions. Refer to the Core Concepts and API Reference sections of the Wiki for more detailed information about each class and interface.