I'm still struggling with this one.
As an example, I compared the results for the same DAC with two different ADCs, and compared the results from my "home-made" SINAD to the "official" SINAD:
[Attachment: AES-17 Notch debug.png]
(The X axis is in Vrms - sorry for that, it was just a "quick and dirty" plot.)
There are some coherent results:
When the DAC and ADC are in the same machine (the RME), there is no difference between the two methods, which is to be expected.
When the ADC is separate, we see a small gap... until it suddenly explodes between -3.0dBFS and -2.5dBFS...
So what's going on at -2.5dBFS?
[Attachment: MI Notch -2.5dBFS.png]
I compared with REW (AES-17 notch):
[Attachment: Cosmos ADC PCM32_384 1 on ASIO4ALL v2 at 48 kHz, 32.768-point Spectrum, Dolph-Ch.png]
The main difference is in the noise.
Here are my bands:
;A-Frequency Band (Hz), A-RMS (dBFS)
706.055Hz ~ 1410.64Hz, -2.61886
1984.86Hz ~ 2006.84Hz, -145.616
2982.42Hz ~ 3004.39Hz, -129.26
3979.98Hz ~ 4001.95Hz, -137.651
4977.54Hz ~ 4999.51Hz, -139.163
5975.1Hz ~ 5997.07Hz, -148.723
6972.66Hz ~ 6994.63Hz, -146.379
7970.21Hz ~ 7992.19Hz, -150.739
8967.77Hz ~ 8989.75Hz, -144.307
9965.33Hz ~ 9987.3Hz, -149.503
10962.9Hz ~ 10984.9Hz, -147.226
11960.4Hz ~ 11982.4Hz, -151.459
12958Hz ~ 12980Hz, -150.691
13955.6Hz ~ 13977.5Hz, -148.718
14953.1Hz ~ 14975.1Hz, -144.512
15950.7Hz ~ 15972.7Hz, -148.078
16948.2Hz ~ 16970.2Hz, -151.32
17945.8Hz ~ 17967.8Hz, -151.996
18943.4Hz ~ 18965.3Hz, -148.199
19940.9Hz ~ 19962.9Hz, -150.035
20.5078Hz ~ 19999.5Hz, -2.61886
The last one being full bandwidth.
(By the way: is the upper frequency bin included in the result for the band or not? It seems it's not.
For example, a band of 11960.4Hz ~ 11960.4Hz gives a result of -∞.)
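My guess, and it is only an assumption about how the analyzer picks the bins, is that the band power is summed over FFT bins in a half-open interval [f_lo, f_hi): the upper edge is excluded, so a zero-width band contains no bins, its power is zero, and the dB value comes out as -∞. A minimal Python sketch of that convention, with hypothetical names:

import numpy as np

def band_rms_dbfs(power_spectrum, freqs, f_lo, f_hi):
    # Sum the power of the FFT bins falling in [f_lo, f_hi).
    # The half-open interval is an assumption, not a confirmed detail.
    mask = (freqs >= f_lo) & (freqs < f_hi)
    p = power_spectrum[mask].sum()
    return 10 * np.log10(p) if p > 0 else -np.inf  # empty band -> -inf

With f_lo == f_hi the mask selects no bins, which would reproduce the -∞ I see.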
I compute the noise level as follows:
10*LOG10(
10^([fnRMS_A(EU)_Array[20]]/10)
-10^([fnRMS_A(EU)_Array[0]]/10)
-10^([fnRMS_A(EU)_Array[1]]/10)
-10^([fnRMS_A(EU)_Array[2]]/10)
-10^([fnRMS_A(EU)_Array[3]]/10)
-10^([fnRMS_A(EU)_Array[4]]/10)
-10^([fnRMS_A(EU)_Array[5]]/10)
-10^([fnRMS_A(EU)_Array[6]]/10)
-10^([fnRMS_A(EU)_Array[7]]/10)
-10^([fnRMS_A(EU)_Array[8]]/10)
-10^([fnRMS_A(EU)_Array[9]]/10)
-10^([fnRMS_A(EU)_Array[10]]/10)
-10^([fnRMS_A(EU)_Array[11]]/10)
-10^([fnRMS_A(EU)_Array[12]]/10)
-10^([fnRMS_A(EU)_Array[13]]/10)
-10^([fnRMS_A(EU)_Array[14]]/10)
-10^([fnRMS_A(EU)_Array[15]]/10)
-10^([fnRMS_A(EU)_Array[16]]/10)
-10^([fnRMS_A(EU)_Array[17]]/10)
-10^([fnRMS_A(EU)_Array[18]]/10)
-10^([fnRMS_A(EU)_Array[19]]/10)
)
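In other words, a compact sketch of the same computation in Python, with a hypothetical band_dbfs list where indices 0..19 are the fundamental and harmonic bands above and index 20 is the full band:

import numpy as np

def noise_dbfs(band_dbfs):
    # Convert all band levels from dB to linear power, then subtract the
    # fundamental and the 19 harmonic bands from the full-band power.
    p = 10 ** (np.asarray(band_dbfs) / 10.0)
    noise_power = p[20] - p[:20].sum()
    return 10 * np.log10(noise_power)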
And SINAD as
-10*LOG10(
10^([fnRMS_A(EU)_Array[20]]/10)
-10^([fnRMS_A(EU)_Array[0]]/10)
)
+[fnRMS_A(EU)_Array[20]]
(I could use [fnRMS_A(EU)_Array[0]] instead, but the result is the same.)
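Same thing for the SINAD, again just a sketch with the hypothetical band_dbfs array from above:

import numpy as np

def sinad_db(band_dbfs):
    # Signal (the full-band level, dominated by the fundamental) over what is
    # left once the fundamental band is removed in the power domain.
    p = 10 ** (np.asarray(band_dbfs) / 10.0)
    noise_plus_dist_db = 10 * np.log10(p[20] - p[0])
    return band_dbfs[20] - noise_plus_dist_db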
Initially, I was using the "official" THD value to compute the sum of the distortion levels, but as you can see, that makes little difference (0.1dB distortion difference).
EDIT: Actually, it does!
Writing the above I suddenly had a doubt and wanted to check.
At those very low levels, a tiny difference in distortion may have a big impact on the result.
That doesn't explain it all, for sure. But still, it makes my quest even more difficult, I'm afraid.
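To show how touchy the subtraction is at these levels (the numbers below are purely hypothetical, just to illustrate the arithmetic): when the full-band level and the fundamental-band level are almost equal, their power difference is tiny, so a change of only 0.001dB in either one moves the residual by a few dB.

import numpy as np

full_band = -2.6189  # hypothetical full-band level, dBFS
for fundamental in (-2.6199, -2.6209, -2.6219):  # hypothetical fundamental levels, 0.001dB apart
    residual = 10 * np.log10(10 ** (full_band / 10) - 10 ** (fundamental / 10))
    print(f"fundamental = {fundamental:.4f} dBFS -> residual = {residual:.1f} dBFS")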
Any idea what I'm doing wrong here?