- cross-posted to:
- anime@lemmy.ml
They are also AI-dubbing shows that already have a dub: https://xcancel.com/Pikagreg/status/1994654475089555599
According to the beta testers, and the internet at large, listeners abhorred both the LLM localization and the tone-deaf speech dubbing. Keeping the original dubs is simply what folks want, especially if the release is labeled abridged.
At the very least, now you know why this /c/ prefers subs: they are that much cheaper to produce and less error-prone.
Yes, in its current state. Will it stay that way? The tech companies are burning cash trying to make it not so. My hunch is that even Vocaloid-tier AI dubbing will be good enough for a large segment of the audience. Then the human-vs-AI dubbing debate could become analogous to the debate between lossy (more accessible) and lossless (higher quality) audio.
LLM localization, though, is the greater challenge. I highly doubt those models, including classic machine-learning ones, can reach N1-level localization quality.
The only funny thing about mentioning Vocaloid is that Vocaloid synthesis has to be manually pitched, tempo-adjusted, and toned 🤣. Glad you honestly believe capitalists want to invest more in passing off tone-deaf, pitchless speech waveforms.
But please, never stop supporting espeak!
espeak looks pretty cool. Thanks for sharing.