Live translation subtitling of live television programmes is probably the most challenging form of subtitling – combining as it does the challenges of real-time subtitle creation with the skills needed to produce a good translation.
Same-language subtitling of live TV shows for deaf and hard-of-hearing viewers is quite commonplace – however the results are not comparable to those of prepared subtitles. There is a noticeable delay between speech and subtitle, making it very difficult for viewers with residual hearing to use the subtitles simply to fill in the gaps (as they would with recorded programmes). There is also the inevitable occasional error with live subtitles, although as long as errors are not too frequent the viewer is generally forgiving.
Delay and accuracy can both be improved by preparing subtitle texts in advance (e.g. from a script or video preview), leaving the operator to merely deliver (or cue) each subtitle in sync with the live presentation.
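The cueing workflow described above can be modelled very simply: the subtitle texts already exist, and each operator keypress just puts the next one on air. Below is a minimal sketch of that idea – the `CueingSession` class and its method names are illustrative assumptions, not the API of WinCAPS or any real subtitling product.

```python
from typing import List, Optional

class CueingSession:
    """Hypothetical model of manual cueing: subtitles are prepared in
    advance, and the operator fires them one at a time in sync with
    the live presentation."""

    def __init__(self, prepared: List[str]):
        self._queue = list(prepared)      # prepared subtitle texts, in order
        self.on_air: Optional[str] = None # subtitle currently displayed

    def cue_next(self) -> Optional[str]:
        # One operator keypress: put the next prepared subtitle on air.
        if self._queue:
            self.on_air = self._queue.pop(0)
        else:
            self.on_air = None  # end of prepared text; clear the display
        return self.on_air
```

The point of the model is that all the hard work (writing and timing-agnostic checking of the text) happens before transmission; the live task is reduced to pressing a key at the right moment.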
Where truly live subtitles must be provided, the operator can listen to the source language and repeat what is heard – either by touch-typing or by “respeaking” through a speech recognition engine.
The text-input latency associated with same-language live subtitling (i.e. the delay between audio and subtitle) increases considerably when translation is involved: a translator must listen to more of the original audio before a translation can be voiced or typed.
Many facilities are wary of committing to subtitles for live translation – there is too much scope for error, and too much delay.
Some customers instead use an ultra-fast turnaround method involving several people:
- delay the video by enough time to prepare a good translation;
- an interpreter trained in respeaking listens to the live audio and creates the base subtitle text (using WinCAPS);
- a second operator checks and corrects the recognised text for accuracy and quality of translation;
- a third operator listens to the delayed programme feed and manually cues each subtitle in sync with the presentation.
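The three-operator chain above is essentially a pipeline of hand-offs, which can be sketched as three workers connected by queues. This is an illustrative simulation only – the function and variable names are assumptions, and the `corrections` mapping stands in for the second operator's manual fixes to the recognised text.

```python
import queue
import threading

def run_pipeline(live_audio_lines, corrections):
    """Simulate the three-operator chain on a list of spoken lines.
    `corrections` maps mis-recognised text to its corrected form."""
    recognised = queue.Queue()  # respeaker -> corrector
    corrected = queue.Queue()   # corrector -> cuer
    on_air = []                 # subtitles cued against the delayed feed
    DONE = object()             # sentinel marking end of programme

    def respeaker():
        # Operator 1: listens to the live audio and respeaks it into a
        # recognition engine, producing raw subtitle text.
        for line in live_audio_lines:
            recognised.put(line)
        recognised.put(DONE)

    def corrector():
        # Operator 2: checks the recognised text and fixes errors
        # before the subtitle goes to air.
        while (item := recognised.get()) is not DONE:
            corrected.put(corrections.get(item, item))
        corrected.put(DONE)

    def cuer():
        # Operator 3: watches the delayed programme feed and cues each
        # corrected subtitle in sync with it.
        while (item := corrected.get()) is not DONE:
            on_air.append(item)

    threads = [threading.Thread(target=f) for f in (respeaker, corrector, cuer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return on_air
```

The video delay is what buys the second and third operators their working time: by the time the delayed feed reaches the cuer, the corrected subtitle for each utterance is already waiting in the queue.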
A variation on this technique might involve separating the roles of interpreter and text entry (particularly if typing or stenography is used rather than respeaking). Another variation would be to adjust the video delay to match the average subtitle creation time – allowing an approximate synchronisation between audio and subtitle without the need for the final operator in the chain to “cue” subtitles manually.
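The second variation relies on simple arithmetic: if the video is delayed by the average subtitle creation time, each subtitle lands roughly in sync with the delayed feed, and any residual error is just the difference between that subtitle's creation latency and the average. A small sketch of that calculation (the function name is hypothetical, not a feature of any product):

```python
def matched_delay_schedule(creation_latencies):
    """Given per-subtitle creation latencies in seconds, return the
    video delay that matches their average, plus each subtitle's
    residual sync error against the delayed feed (positive = late)."""
    video_delay = sum(creation_latencies) / len(creation_latencies)
    # Subtitle i becomes available creation_latencies[i] seconds after
    # the speech; the delayed feed replays that speech video_delay
    # seconds after it happened, so the mismatch is the difference.
    residuals = [lat - video_delay for lat in creation_latencies]
    return video_delay, residuals
```

With latencies of 4, 6 and 5 seconds, a 5-second video delay leaves each subtitle within a second of its speech – approximate synchronisation without a dedicated cueing operator.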