fix: use rounding for float-to-integer conversions
This PR replaces truncating casts with proper rounding in float-to-integer sample conversions to eliminate systematic bias and nonlinear distortion.
## Problem
The current implementation uses truncating casts (e.g. as i16), which creates two issues:
- **Nonlinear distortion:** all scaled values in the open interval (-1.0, 1.0) truncate to zero, making the zero output bin twice as wide as any other integer code. This violates the uniform quantization assumption and introduces harmonic distortion.
- **Systematic bias toward zero:** small signals that should map to ±1 are instead lost to zero, introducing DC bias and reducing the effective dynamic range by about 8 dB.
This is documented in Dannenberg's letter "Danger in Floating-Point-to-Integer Conversion" (Computer Music Journal, 2002), which warns against truncation in audio applications.
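The zero-bin doubling can be made concrete with a small sketch. The helper names `trunc_i16`/`round_i16` below are illustrative stand-ins for the conversion before and after this change; the sketch sweeps a fine input grid and counts how many inputs land on each output code:

```rust
// Hypothetical helpers mirroring the conversion before and after this change.
fn trunc_i16(s: f64) -> i16 {
    (s * 32_768.0) as i16
}

fn round_i16(s: f64) -> i16 {
    (s * 32_768.0).round() as i16
}

/// Count how many grid inputs convert to `code` (i.e. the bin width).
/// 128 grid points per quantization step keeps all arithmetic exact in f64.
fn bin_width(f: fn(f64) -> i16, code: i16) -> usize {
    let step = 1.0 / (32_768.0 * 128.0);
    (-512..=512_i32)
        .filter(|&i| f(i as f64 * step) == code)
        .count()
}

fn main() {
    // Truncation: the zero bin is twice as wide as its neighbours.
    println!(
        "truncate: bin(0)={} bin(1)={}",
        bin_width(trunc_i16, 0),
        bin_width(trunc_i16, 1)
    );
    // Rounding: every bin has the same width.
    println!(
        "round:    bin(0)={} bin(1)={}",
        bin_width(round_i16, 0),
        bin_width(round_i16, 1)
    );
}
```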
## Solution
Replace `(s * scale) as {integer}` with `(s * scale).round() as {integer}` in the float-to-integer conversions.
Before (truncation):

```rust
fn to_i16(s: f32) -> i16 {
    (s * 32_768.0) as i16
}
// 0.1 * 32768.0 = 3276.8 → 3276 (truncated)
// 0.00002 * 32768.0 = 0.65536 → 0 (small signal lost)
```
After (rounding):

```rust
fn to_i16(s: f32) -> i16 {
    (s * 32_768.0).round() as i16
}
// 0.1 * 32768.0 = 3276.8 → 3277 (rounded)
// 0.00002 * 32768.0 = 0.65536 → 1 (small signal preserved)
```
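As an illustrative check (not part of the diff), a low-level sine with peak amplitude 0.00002 (about 0.655 quantization steps) vanishes entirely under truncation but survives as ±1 with rounding. Helper names here are hypothetical:

```rust
// Hypothetical helpers mirroring the conversion before and after this change.
fn trunc_i16(s: f64) -> i16 {
    (s * 32_768.0) as i16
}

fn round_i16(s: f64) -> i16 {
    (s * 32_768.0).round() as i16
}

/// Peak output code over one cycle of a sine with the given peak amplitude.
fn peak_code(f: fn(f64) -> i16, amp: f64, n: usize) -> i16 {
    (0..n)
        .map(|i| {
            let s = amp * (2.0 * std::f64::consts::PI * i as f64 / n as f64).sin();
            f(s).abs()
        })
        .max()
        .unwrap()
}

fn main() {
    // The sine never exceeds 0.655 steps, so truncation outputs only zeros...
    println!("truncate peak: {}", peak_code(trunc_i16, 0.00002, 1000));
    // ...while rounding preserves the signal as a ±1 square-ish wave.
    println!("round peak:    {}", peak_code(round_i16, 0.00002, 1000));
}
```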
The performance impact is minimal: LLVM generates efficient code for the `round()` intrinsic, with dedicated instructions on most targets.
The failing build is unrelated and already present on master since https://github.com/RustAudio/dasp/commit/cd8d8931ba6ff589d5c6334641641c116e09fe58.
➡️ PR #192
Rebased; CI is passing now.