The (Slowly) Improving Quality of Web Closed Captions

There’s a growing trend on social media and on sites like Reddit and Quora to showcase captioning errors from television and from numerous online platforms. If accessibility laws are tightening and quality standards for broadcast captioning are becoming more rigorous, how do these bloggers have so much fuel for their posts? It is a simple question with many complicated answers.

Live television programming is captioned in real time, either by machines or by humans working with a stenotype machine (like those used in courtrooms), so the captions tend to lag slightly behind the audio and inevitably include some paraphrasing and errors. While the Federal Communications Commission requires American television stations’ post-production captions to meet certain standards, the Internet remains largely unregulated. Video-sharing websites like YouTube have struggled to provide accessible captions, and despite YouTube’s recent efforts to improve accessibility, its captions continue to disappoint viewers, especially those in the deaf and hard of hearing community.

In a 2014 Atlantic article called “The Sorry State of Closed Captioning,” Tammy H. Nam explains why machines cannot create the same experience humans can. She writes, “Machine translation is responsible for much of today’s closed-captioning and subtitling of broadcast and online streaming video. It can’t register sarcasm, context, or word emphasis.” By using machines instead of human writers and editors, sites like YouTube are not providing the deaf and hard of hearing the same viewing experience they provide their other patrons. Humans can tell which homophone to use based on context: there is an enormous difference between soar and sore, air and heir, suite and sweet. Humans can also determine when a noise is important to the plot of a story and include it in the captions so that a non-hearing viewer won’t miss critical details. In the same Atlantic article, deaf actress Marlee Matlin says, “I rely on closed captioning to tell me the entire story…I constantly spot mistakes in the closed-captions. Words are missing or something just doesn’t make sense.” Accessible closed captions should follow the spoken dialogue and important sounds exactly, so that viewers stay immersed in the story. Having to decipher poor captions pulls the viewer out of the flow of the story and creates a frustrating experience.

YouTube introduced its own automatic captioning software for its creators in 2010. The software is known for its incomprehensible captions. Deaf YouTuber and activist Rikki Poynter made a video in 2015 highlighting the various ways in which YouTube’s automatic captions are inaccessible. In a 2018 blog post describing her experience with the software, she wrote, “Most of the words were incorrect. There was no grammar. (For the record, I’m no expert when it comes to grammar, but the lack of punctuation and capitalization sure was something.) Everything was essentially one long run-on sentence. Captions would stack up on each other and move at a slow pace.” For years, Rikki and other deaf and hard of hearing YouTube users had to watch videos with barely any of the audio accurately conveyed. Although her blog post notes the ways in which YouTube’s automatic captions have improved since 2015, she writes, “With all of that said, do I think that we should choose to use only automatic captions? No, I don’t suggest that. I will always suggest manually written or edited captions because it will be the most accurate. Automatic captions are not 100% accessible and that is what captions should be.” The key word is accessible. When captions do not accurately reflect the spoken words in videos, television shows, and movies, the stories and information are inaccessible to the deaf and hard of hearing. Missing words, incorrect words, poor timing, and captions that cover subtitles or other important graphics all pull the viewer out of the experience or leave out details critical to fully understanding and engaging with the content. Until web platforms like YouTube take their deaf and hard of hearing viewers’ complaints seriously, they will continue to alienate them.

So, what can we do about poor web closed captioning? Fortunately, the Internet is also an amazing tool that gives consumers and users a voice in how they experience web content. Deaf and hard of hearing activists like Marlee Matlin, Rikki Poynter, and Sam Wildman have been using their online platforms to push for better web closed captions. Follow in their footsteps and use the voice the web gives you. Make a YouTube video like Rikki Poynter, or write a blog post like Sam Wildman’s “An Open Letter to Netflix Re: Subtitles.” You can buy a T-shirt to support Rikki Poynter’s #nomorecraptions campaign. The Internet is a powerful platform through which large companies like Google can hear directly from their consumers. If you would like to see the quality of closed captions on the web improve, use your voice.

Otherwise, you’ll continue to see memes like this one…

[Image: YouTube caption fail meme]
