Recent comments in /f/headphones

giant3 t1_jcu623p wrote

> negative about aptx

It's based on fact. This aptX vs Others test shows that standard aptX and SBC at 328 kbps differ by only 0.6 dB in distortion. aptX HD is better, but it runs at 529 kbps; SBC or SBC XQ at that bitrate would perform similarly.

> AAC for instance is better than Aptx?

Absolutely. Any day of the week. The only codec that can beat AAC is Opus. There is extensive scientific literature on audio codecs; people have spent their entire careers on them, and their day job is evaluating codecs both subjectively and objectively.

AAC & Opus are superior to every other lossy codec out there.

2

No-Context5479 t1_jctgabm wrote

Yeah, I listen to a vast range of genres, each with its own "standard" industry LUFS. Classical tends to be the genre most filled with dynamic range, so its LUFS never gets beyond -14... Pop and other genres like electronic music tend to crank their compression. A sudden change between songs is very audible and sometimes jarring, and I never want to be overly distracted, so smooth transitions all around... But I get you. Preferences, preferences 🤝🏾

1

bearstakemanhattan t1_jctfl6z wrote

YouTube and Spotify have different loudness normalization standards: Spotify normalizes to roughly -14 LUFS whereas YouTube normalizes to -12, IIRC. I noticed this too, but if you level match them it doesn't sound different. It could also have to do with the psychology of having a music video to go with the music, but I have no evidence to back that theory.
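If you want to level match the two platforms by hand, the dB-to-linear conversion is simple arithmetic. A rough sketch in Python, assuming the approximate -14 and -12 LUFS targets mentioned above:

```python
# Rough level-matching sketch using the (approximate) normalization
# targets mentioned above: Spotify ~ -14 LUFS, YouTube ~ -12 LUFS.
SPOTIFY_LUFS = -14.0
YOUTUBE_LUFS = -12.0

# Difference between the two targets, in dB
diff_db = YOUTUBE_LUFS - SPOTIFY_LUFS  # 2.0 dB: YouTube plays louder

# Linear gain factor to apply to the quieter (Spotify) stream
# so both play at the same level: gain = 10^(dB/20)
gain = 10 ** (diff_db / 20)
print(f"YouTube is {diff_db:.1f} dB louder; scale Spotify by {gain:.2f}x")
```

With the levels matched this way, any remaining difference you hear comes down to the codecs (or the music video).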

1

No-Context5479 t1_jctf04e wrote

That was debunked years ago by Spotify themselves, and people verified it with ABX tests.

So I advise turning it back on and using Quiet or Normal. It really makes things easy on the ears for long listening sessions. Loud is not it at all for me... it cranks the volume, and I try to avoid anything beyond 75 dB...

Also, all good... We're all still learning in this space... I learned something new the other day about vents and IEMs, which I knew nothing of. 👍🏾

1

No-Context5479 t1_jctdr7u wrote

u/thebirdman9999, Huh.... Normalise doesn't impact dynamic range if it's set to Normal or Quiet... Where did you hear such lies... Normalisation doesn't compress the file; it just lowers the volume to a set standard LUFS across the board, so louder, more compressed songs don't sound jarring when they play right after songs with good dynamic range and acceptable LUFS.

Please don't go around saying Normalisation reduces dynamic range.

Lemme link this wall of text:

Spotify goes into detail on this page (https://artists.spotify.com/faq/mastering-and-loudness#can-users-adjust-the-levels-of-my-music) about what each setting does.

> When we receive your audio file, we transcode it to delivery formats Ogg/Vorbis and AAC. At the same time, we calculate the loudness level and store that information as metadata in the transcoded formats of your track.

> Playback levels are not adjusted when transcoding tracks. Tracks are delivered to the app with their original volume levels, and positive/negative gain compensation is only applied to a track while it’s playing. This gives users the option to adjust the Loudness Normalization if they want to.

> Negative gain is applied to louder masters so the loudness level is at ca -14 dB LUFS. This process only decreases the volume in comparison to the master; no additional distortion occurs.

> Positive gain is applied to softer masters so that the loudness level is at ca -14 dB LUFS. A limiter is also applied, set to engage at -1 dB (sample values), with a 5 ms attack time and a 100 ms decay time. This will prevent any distortion or clipping from soft but dynamic tracks.

> The gain is constant throughout the whole track, and calculated to match our desired output loudness level.

> Premium users can choose between the following volume normalization levels in their app settings:

> Loud - equalling ca -11 dB LUFS (+6 dB gain multiplied to ReplayGain)

> Normal (default) - equalling ca -14 dB LUFS (+3 dB gain multiplied to ReplayGain)

> Quiet - equalling ca -23 dB LUFS (-5 dB gain multiplied to ReplayGain) This is to compensate for where playback isn’t loud enough (e.g. in a noisy environment) or dynamic enough (e.g. in a quiet environment).

Emphasis mine -- basically Spotify's system is just normalization in most cases, and it only behaves as a compressor to prevent clipping. I suspect that on the "Loud" setting you'd hit that behavior on some tracks, but in general their approach seems designed to avoid it. Personally I'd recommend enabling audio normalization and using the "Quiet" or "Normal" settings.
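To make the quoted behavior concrete, here's a minimal Python sketch of the gain logic the FAQ describes. This is not Spotify's actual code: the function name is made up, the -14 LUFS target and -1 dBFS threshold come from the quoted text, the measured loudness and peak values would come from a loudness meter, and a real limiter works per-sample rather than as a simple flag.

```python
# Minimal sketch (not Spotify's actual code) of the normalization
# logic described in the FAQ quoted above.

def normalization_gain(track_lufs, peak_dbfs,
                       target_lufs=-14.0, limiter_threshold_dbfs=-1.0):
    """Return (gain_db, limiter_engaged) for one track.

    gain_db is constant for the whole track; the limiter only
    matters when positive gain would push peaks past -1 dBFS.
    """
    gain_db = target_lufs - track_lufs
    limiter_engaged = gain_db > 0 and (peak_dbfs + gain_db) > limiter_threshold_dbfs
    return gain_db, limiter_engaged

# A loud modern master: gain is negative, so no compression ever happens
print(normalization_gain(track_lufs=-8.0, peak_dbfs=-0.2))   # (-6.0, False)

# A quiet but dynamic master: positive gain, limiter engages on peaks
print(normalization_gain(track_lufs=-20.0, peak_dbfs=-4.0))  # (6.0, True)
```

Note how loud masters only ever get turned down, so the limiter, the only compression in the chain, never touches them.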

To directly answer your questions:

> Does Spotify's volume normalization use compression?

Yes, but only if clipping would occur, and only for the duration it occurs. Since tracks are usually lowered in volume when this feature is on, it should be exceedingly rare for this to happen.

> Why does Spotify (and the internet) say that dynamic range is most preserved on quiet normalization?

Since the compression only kicks in if clipping (or near-clipping) is detected, it's fairly unlikely to engage on the Normal setting, and it's basically guaranteed not to happen on the Quiet setting. It might kick in in some cases on the Loud setting, but from their documentation it sounds like that would only happen if a track was super quiet to begin with.

So it only compresses if the track was mastered terribly, with extremely loud LUFS... A well-mastered song is never hit with a dynamic range penalty... This is good, as it has helped us move away from the era when people brickwalled their audio so their CDs sounded louder than other acts', thinking that would lead to more plays. Nowadays if you fuck up your own mix, you get the same volume level as songs mastered with tons of dynamic range, and your track will just end up sounding trash...

TL;DR: Sorry for the long-winded text, but no, Spotify doesn't touch the dynamic range of a song. They just lower the volume so all songs on the platform play at the same level.

1

thebirdman9999 t1_jctcopq wrote

The *normalize volume* setting should be set to off to get the best quality out of Spotify.

Using the normalize volume setting is not great for the dynamic range.

Another thing I find strange is that you say YouTube sounds louder; I'm a bit confused by that. If I don't touch the volume and compare the same songs between YouTube and Spotify, Spotify is louder for sure. I would guess the normalize volume setting probably has something to do with that, or maybe another setting.

1

thebirdman9999 t1_jct90wz wrote

The high freqs on YouTube are less defined and clear than on Spotify, so the sound on YouTube is, in other words, warmer/smoother. Since it's smoother, you can probably turn up the volume a bit more, resulting in a wider stage; simple as that.

That's my observation when I compared the same songs between Spotify Premium and *regular* YouTube music videos of them.

I do use YouTube to listen to music sometimes when I want to relax late at night (: even if I can get higher quality on Spotify.

1

Thetruthira OP t1_jct58cm wrote

Reply to comment by Shoo--wee in Aux cable extension by Thetruthira

This is exactly the one I bought and then returned. It actually sounds great, but it won't go all the way into the female port, causing it to stutter or even cut out the sound completely (every 5 seconds!!) for me. I definitely wouldn't recommend it for desktop use, though it's great for phones.

1

Redracerb18 t1_jcs8wed wrote

Personally I haven't looked into them specifically.

What I was talking about was a lot bigger.

So most audio files have either stereo or mono sound mixing: separate left and right channels, or the same sound in both ears. A perfect example of channel mapping and soundstage is Earth, Wind & Fire's "September," where you can hear the bongo player in the far front-right speaker. Or Frank Zappa's album Apostrophe, where you can hear Frank move around the room. Or even Steely Dan, where every song is almost perfect. For a good long time you were dealing with mono sound on vinyl: while you had the ability to play stereo, most songs were recorded for just one speaker, your center channel. What I want is to be able to play a song through an AI that will automatically assign vocal arrangements and tones to the left or right channels respectively. If the drummer is on the right, I want to hear the drummer in the right speaker. Or in a duet where two people share one mic, I want each person's voice on their respective side.

That's the theory. In practice you have to analyze every tonal shift over every note of the song, assign a channel to that instrument, then repeat for every other instrument until you get a file where every instrument plays independently. Then you need to be able to combine selected channels into several primary channels like your front, your left, your right, mapping both forward and back respectively. With every instrument it gets more complicated.

Most bands have a standard arrangement where the drummer is at the back and vocals are center front. Bass can be on either the left or the right. Don't forget the piano, which could also be center stage depending on the artist, or at the back right. It's easier with multiple microphones, where you already have different audio channels, but when you start talking about only one mic, like the "Wall of Sound," it gets more complicated.

The advantage of AI is that it learns how to do this over time. It would get fed more data from more songs, and over time older songs could be rerun through the AI to improve them even further. I know I am in essence talking about a massive shift in mastering recordings. I know that I could be talking about completely overwriting an artist's original vision for a song. This would be an AI that would either push the music industry into adopting better soundstage, or become a massive issue of copyrighted works and IP ownership. I'm not sure at what point this is transformation of the art or just plagiarism. All I know is that this would be one of the biggest changes to music in world history, possibly bigger than recorded audio in the first place.

1

smalg2 t1_jcr68oa wrote

> Do you think Sony, Sennheiser will sell lots of their devices without?

Probably not, and I suspect that was kind of the point. We could simply have increased SBC's bitrate and enjoyed high-quality music with our existing SBC gear, the end. But instead, a company saw the money-making potential of the situation, bought the rights to an audio codec designed in the '80s, and pushed for it to be used with Bluetooth by marketing it as "HD audio" (which it wasn't really, at least not the original non-HD aptX). Headset makers got to sell more headsets ("Oh, you want to use this fancy new codec? A shame it doesn't work with your current headset, you'll need to buy a new one. Too bad!" - sad Pikachu face), creating more electronic waste in the process. Qualcomm got to collect licensing fees from millions of encoders and decoders around the world. And consumers obviously got to pay for all of it (who else?). Other companies saw this and joined the game with their own codecs, and the Bluetooth audio landscape is now the huge mess we all know: a plethora of codecs competing against each other, and an endless list of platform-specific incompatibilities and limitations. All this when the solution was right there from the start: SBC...

I'm not saying SBC doesn't have room for improvement, especially regarding latency, but it was designed to be capable of much more than what we ended up using it for. It was supposed to support adaptive bitrate, for example, but AFAIK that was never implemented properly.

So yes, my opinion of aptX is pretty negative, because IMO this is a typical case of consumers getting abused to make corporations even more money, when much more elegant (but less lucrative) solutions were available. Bluetooth audio could have been so much better... Oh well, rant over.

3

dimesian t1_jcqybce wrote

The better one is whichever you enjoy the most. Just last night I listened to a much-loved track on my old iPod that I hadn't used for several years; I think I bought it on iTunes, so it's possibly a 128 kbps track. I then tried the hi-res version on Tidal and didn't enjoy it as much, though it didn't sound bad. I tried it on YouTube Music (too much free time) and it sounded great; someone else might prefer the hi-res version.

1