It just wasn't a priority. It doesn't impact the bottom line, so it didn't get a significant amount of investment.
It was not impossible to implement, no. We can only guess as to why it wasn't implemented, but if I had to guess, I'd say it's simply not been prioritized.
A couple of reasons, I think:
- AI dubbing: this makes it way easier for YouTube to add secondary dubbed tracks to videos in multiple languages. Based on the Google push to add AI into everything, including creating AI-related OKRs, that's probably a primary driver. Support for multiple audio tracks is just needed infrastructure for AI dubbing.
- Audio description: Google is fighting enough antitrust-related legal battles right now. The fact that YouTube doesn't support audio description for those of us who are blind has been an issue for a long time, and now that basically every other video streaming service supports it, I suspect they're starting to feel increased pressure to get on board. Once again, support for multiple audio tracks is needed infrastructure for offering audio description.
Wdym "only now"? It's been a thing for like a year now
It's been there for at least a year, I feel like. My PC's default language is Spanish, so on some videos it defaults to a dub.
I actively had to set my region and language to English so YT doesn't auto-translate video titles into my native language...
It can't be that hard to implement a switch to opt-out of it, can it?
Since I haven't seen any of the comments mention this yet...
I think the big reason is storage/bandwidth.
Digital audio is an interesting form of media because the size of an audio file is determined almost entirely by 1) how long it is and 2) the bitrate/quality, and has a lot less to do with what the actual content is. Therefore, an audio track of a video that contains dialogue and music is pretty much the same size as one that only contains music. So if you were to, for example, separate the dialogue and music of a video into two tracks to allow a user to adjust the volume of either independently of the other (an amazing user experience IMO), the storage size (and bandwidth usage) of the audio virtually doubles despite no "additional" content being added.
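A rough sketch of that arithmetic, assuming constant-bitrate audio (the 128 kbps and 10-minute figures are just illustrative values, not anything YouTube actually uses):

```python
# Back-of-the-envelope: a CBR audio track's size depends only on
# duration and bitrate, not on what the audio actually contains.

def audio_track_size_mb(duration_s: float, bitrate_kbps: float) -> float:
    """Approximate size in megabytes of a constant-bitrate audio track."""
    bits = bitrate_kbps * 1000 * duration_s
    return bits / 8 / 1_000_000  # bits -> bytes -> MB

# A 10-minute track at 128 kbps, whether it's dialogue+music or music only:
one_track = audio_track_size_mb(600, 128)
print(f"one full-length track:  {one_track:.1f} MB")

# Splitting dialogue and music into two full-length tracks doubles
# storage/bandwidth even though no content was added:
two_tracks = 2 * one_track
print(f"two full-length tracks: {two_tracks:.1f} MB")
```

Which is why every extra language or description track a service offers costs roughly a full additional copy of the audio, per video, per quality level.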
Multiple audio tracks are actually something I've wanted forever, especially for watching streams on Twitch. But I think it's a pretty hefty burden to place on the service, especially if a lot of people aren't even going to use or notice it.