Getting Started¶
Requirements¶
- Python >= 3.10
- numpy
- C++17 compiler (for building from source)
- CMake >= 3.15 (for building from source)
Installation¶
From PyPI¶
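This section is empty on this page; assuming the package is published on PyPI under the same name as the repository, `nanodsp` (an assumption, not confirmed here), a standard pip install would be:

```shell
# Install the released wheel from PyPI (package name assumed to match the repo)
pip install nanodsp
```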
From source¶
```shell
git clone https://github.com/shakfu/nanodsp.git
cd nanodsp
uv sync        # install dependencies + build extension
uv run pytest  # run tests
uv build       # build wheel
```
Use `make help` for additional targets (build, test, lint, format, typecheck, qa, coverage, etc.).
Basic usage¶
Loading audio¶
```python
from nanodsp.buffer import AudioBuffer

# From file
buf = AudioBuffer.from_file("input.wav")

# Generate test signals
buf = AudioBuffer.sine(440.0, frames=44100, sample_rate=44100)
buf = AudioBuffer.noise(channels=2, frames=44100)
buf = AudioBuffer.impulse(frames=1024)
buf = AudioBuffer.zeros(channels=1, frames=4096)
```
Processing audio¶
```python
from nanodsp.effects import filters, dynamics

# Direct function calls
filtered = filters.lowpass(buf, cutoff_hz=1000.0)
compressed = dynamics.compress(filtered, threshold=-18.0, ratio=4.0)

# Pipeline style
result = (
    buf
    .pipe(filters.highpass, cutoff_hz=80.0)
    .pipe(filters.lowpass, cutoff_hz=12000.0)
    .pipe(dynamics.compress, threshold=-18.0)
)
```
Writing output¶
Channel operations¶
```python
# Mono/stereo conversion
mono = buf.to_mono("mean")
stereo = mono.to_channels(2)

# Channel access
left = buf.channel(0)   # 1D numpy view
channels = buf.split()  # list of mono AudioBuffers

# Stack channels
merged = AudioBuffer.concat_channels(left_buf, right_buf)
```
Arithmetic¶
Architecture¶
All DSP functions accept and return AudioBuffer objects. The processing pipeline is:
- AudioBuffer wraps a 2D float32 numpy array of shape `[channels, frames]`, plus metadata
- Python functions validate parameters and delegate to the C++ bindings
- The C++ layer processes audio with the GIL released, so multiple threads can process buffers concurrently
- The result is returned as a new AudioBuffer (functions are non-mutating)