I also wonder whether or not GrapheneOS, or open source Linux OSs in general, will face any repercussions for failing to comply with these regulations, given their relatively low user counts.
Putting Hitler + WW2 alongside US regime change Iraq and Iran is an odd comparison, ngl.
  • 14 dead US service members
  • A couple of thousand dead civilians in Iran and Lebanon

Same here; upgraded the old media/gaming PC in the living room back in October.

The same 32GB kit I bought then for $90 is now $430. Utterly insane.

Yeah, it is. Vorbis is the actual codec.

If we’re talking free tier Spotify, then it could actually be due to the bitrate (96kbps Ogg Vorbis, IIRC). However, if you’re a premium subscriber then the standard bitrate is 160kbps, which is transparent to 99.99% of people.

That said, after much testing, I found that a noticeable audible difference between a local file and the same song on a streaming service is almost always due to either a loudness differential or because the two tracks come from different masters.

I do the same, as it happens, so I won’t argue with you.

As for “why care?”, I’d say it’s about making informed decisions and not spending money unnecessarily in the pursuit of genuinely better sound quality.

The thing is, dynamic range compression and audio file compression are two entirely separate things. People often conflate the two by thinking that going from wav or flac to a lossy file format like mp3 or m4a means the track becomes more compressed dynamically, but that’s not the case at all. Essentially, an mp3 and a flac version of the same track will have the same dynamic range.
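To make the distinction concrete, here’s a toy sketch (assuming NumPy; the signal, threshold, and ratio are all made up for illustration) showing what dynamic range compression actually does to a waveform. Crest factor (peak-to-RMS ratio) is a rough proxy for dynamic range: a compressor/limiter reduces it, whereas lossy *data* compression like mp3 leaves the waveform, and hence the crest factor, essentially unchanged.

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB: a rough proxy for dynamic range."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def simple_compressor(x, threshold=0.5, ratio=4.0):
    """Toy peak compressor: attenuate anything above the threshold."""
    y = x.copy()
    over = np.abs(y) > threshold
    y[over] = np.sign(y[over]) * (threshold + (np.abs(y[over]) - threshold) / ratio)
    return y

rng = np.random.default_rng(0)
# Quiet signal with occasional loud transients, like a dynamic recording
signal = 0.1 * rng.standard_normal(48000)
signal[::4800] = 1.0  # transient peaks

print(crest_factor_db(signal))                     # high crest factor
print(crest_factor_db(simple_compressor(signal)))  # noticeably lower after DRC
```

Run an mp3 and a flac of the same master through `crest_factor_db` and you’ll get near-identical numbers; run the master through a limiter first and the number drops. That’s the difference people conflate.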

And yes, while audible artifacts can be a thing with very low bitrate lossy compression, once you get to 128kbps with a modern lossy codec it becomes pretty much impossible to hear in a blind test. Hell, even 96kbps Opus is audibly transparent for the vast majority of listeners.

Oh, 100%. I actually tested this by recording bit perfect copies from different streaming services and comparing them with audacity.

I found that the only way to hear a difference between the same song played on two different platforms was 1) if there was a notable difference in gain or 2) if they were using two different masters for the same song. If two platforms were using the same master version, they were impossible to tell apart in an ABX test.
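The comparison above is basically a null test, which you can sketch in a few lines (assuming NumPy; the synthetic “tracks” here stand in for real bit-perfect captures): gain-match the two captures, subtract, and measure the residual. A very deep null means only the gain differed; a shallow null means you’re looking at different masters.

```python
import numpy as np

def null_test_db(a, b):
    """Gain-match b to a by RMS, subtract, and return the residual level
    in dB relative to a. Deeply negative = effectively identical audio
    (only gain differed); shallow = genuinely different content/master."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    b_matched = b * (rms(a) / rms(b))
    residual = a - b_matched
    return 20 * np.log10(rms(residual) / rms(a) + 1e-12)

rng = np.random.default_rng(1)
track = 0.2 * rng.standard_normal(48000)
same_master_louder = track * 1.5                          # only a gain difference
different_master = track + 0.05 * rng.standard_normal(48000)  # altered content

print(null_test_db(track, same_master_louder))  # very deep null
print(null_test_db(track, different_master))    # shallow null
```

With real captures you’d also have to sample-align the two files first (Audacity’s Time Shift + Invert + Mix does the same job), but the principle is identical.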

All of this is to say that the quality of the mastering is orders of magnitude more important than whether or not a track is lossy or lossless, as far as audible audio quality goes.

These days I mostly see the placebo audio arguments coming from streaming-service and FLAC/lossless fanboys.

The clamour for lossless/high-res streaming is the audiophile community in a nutshell. Literally paying more money so your brain can trick you into thinking it sounds better.

Like many hobbies, it’s mainly a way to rationalize spending ever-increasing amounts on new equipment and source content. I was into the whole scene for a while, but once I discovered which components in the audio chain actually improve sound quality and which don’t, I called it quits.