Real-Time and Hybrid News Captioning for Live Programmes Including Re-Speaking
- Real-time text input (speech recognition, Stenograph, Velotype or standard keyboard)
- Automatic text segmentation
- Download of run orders and scripts from newsroom system
- Preparation of transcripts for video packages
- Re-use of live subtitles in a repeat broadcast
- Choice of presentation style (block or scrolling, open subtitles, teletext, or closed captions)
Dragon NaturallySpeaking® integrates easily with WINCAPS Q-LIVE, with the essential functions available directly within the Q-LIVE UI. Q-LIVE adds further value by enforcing a house style, interpreting spoken style-control commands, smoothing the delivery rate and providing for keyboard intervention if required. (Other real-time speech recognition engines may also be supported via a standard keyboard-emulation interface.)
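Q-LIVE's actual smoothing algorithm is not published, but the general idea of smoothing a bursty recognition feed can be sketched as a word buffer released at a steady pace. The `RateSmoother` class and its rate below are illustrative assumptions, not the product's implementation:

```python
from collections import deque

class RateSmoother:
    """Buffer recognised words and release them at a steady pace, so
    bursts from the speech recogniser don't overwhelm the viewer.
    Illustrative sketch only -- not Q-LIVE's published algorithm."""

    def __init__(self, words_per_minute=180):
        self.interval = 60.0 / words_per_minute  # seconds between words
        self.buffer = deque()
        self.next_release = 0.0

    def push(self, words):
        """Accept a burst of recognised words."""
        self.buffer.extend(words)

    def pop_due(self, now):
        """Return the words whose scheduled release time has arrived."""
        released = []
        while self.buffer and now >= self.next_release:
            released.append(self.buffer.popleft())
            self.next_release = max(now, self.next_release) + self.interval
        return released

smoother = RateSmoother(words_per_minute=120)
smoother.push("the quick brown fox".split())
print(smoother.pop_due(now=0.0))  # only the first word is due immediately
```

Driving `pop_due` from a timer loop yields an even word-by-word flow regardless of how unevenly the recogniser delivers its results.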
Stenograph and Velotype inputs are also supported (please ask for details), for use where trained operators for these fast writing devices are available. Alternatively the standard PC keyboard can be used, supported by topic-based shortforms to improve speed and accuracy.
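Topic-based shortforms amount to an expansion table keyed by topic: the operator types a short code and the full phrase appears in the subtitle. A minimal sketch, in which the topics, codes and phrases are invented examples rather than a shipped shortform set:

```python
# Hypothetical shortform tables, one per topic. Real sets would be
# built per story or programme.
SHORTFORMS = {
    "politics": {"pm": "Prime Minister", "fs": "Foreign Secretary"},
    "weather":  {"tmp": "temperatures", "nw": "north-west"},
}

def expand(text, topic):
    """Replace each shortform code with its full phrase; other words
    pass through unchanged."""
    table = SHORTFORMS.get(topic, {})
    return " ".join(table.get(word.lower(), word) for word in text.split())

print(expand("the pm spoke today", "politics"))
# -> the Prime Minister spoke today
```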
The VOCAB FINDER utility allows speech recognition users to research chosen parts of the internet for new vocabulary that may be needed in subtitles. This can be particularly helpful for finding proper nouns, such as place names and names of people, and acronyms related to a specific event or news story, for example.
The utility makes it easy for users to update their Dragon profile with the new terms.
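VOCAB FINDER's actual rules are not published, but the core idea of mining fetched text for unknown proper nouns and acronyms can be sketched as a simple pattern scan against the recogniser's known vocabulary (the sample text and word list below are invented):

```python
import re
from collections import Counter

def find_candidate_vocab(text, known_words):
    """Collect capitalised words and all-caps acronyms that are not
    already in the recogniser's vocabulary, with occurrence counts.
    A simplified sketch of what a vocabulary-mining tool might do."""
    candidates = Counter()
    for match in re.finditer(r"\b(?:[A-Z][a-z]+|[A-Z]{2,})\b", text):
        word = match.group()
        if word.lower() not in known_words:
            candidates[word] += 1
    return candidates

sample = "NATO leaders met in Vilnius. Vilnius hosted the NATO summit."
known = {"leaders", "met", "in", "hosted", "the", "summit"}
print(find_candidate_vocab(sample, known))
```

Ranking candidates by count helps the user decide which terms are worth adding to their Dragon profile.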
Dragon NaturallySpeaking® is a registered trademark of Nuance, Inc. and is used here under license.
The short video below demonstrates re-speaking using Q-LIVE.
Use news scripts to improve accuracy and reduce delay
WINCAPS Q-NEWS interfaces with a newsroom system (such as iNews or ENPS) to download and track user-selected run orders and story scripts. It creates a new, multi-user WINCAPS file for each monitored run order and automatically maintains a copy of the running order and story text.
Automatic segmentation splits the text into readable subtitles based on linguistic and geometric rules that can be tuned for each language.
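The product's per-language rules are not published, but the interplay of the two rule types can be sketched: a geometric rule caps the line length, while a linguistic rule prefers to break after punctuation. The `max_chars` value and the punctuation set below are illustrative assumptions:

```python
def segment(text, max_chars=37):
    """Split text into subtitle lines of at most max_chars characters,
    preferring to end a line after punctuation (a linguistic rule) and
    otherwise breaking at the last word boundary that fits (a geometric
    rule). Illustrative only -- real segmenters use richer rules."""
    lines = []
    current = ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
            # linguistic rule: a clause boundary is a good break point
            if current.endswith((".", "?", "!", ",")):
                lines.append(current)
                current = ""
        else:
            # geometric rule: the next word won't fit, so break here
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

for line in segment("The Prime Minister arrived in Paris today, "
                    "ahead of talks."):
    print(line)
```

Tuning per language would mean adjusting both the character limit (scripts differ in width) and the set of preferred break points.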
Changes to the running order and story scripts are continuously tracked and reflected automatically in the WINCAPS subtitle file. Subtitle editors can adjust the WINCAPS version of the script at any time; any subsequent conflicting update from the newsroom version then flags the story in WINCAPS so the subtitler can compare the two versions easily.
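WINCAPS's own comparison view is not shown here, but the underlying task of presenting the newsroom version against the subtitler's edit can be sketched with a standard text diff (the story lines below are invented):

```python
import difflib

# Hypothetical story text: the newsroom update conflicts with the
# subtitler's edited version on the first line only.
newsroom = ["The minister announced the plan today.",
            "Reaction has been mixed."]
wincaps = ["The minister announced the plan this morning.",
           "Reaction has been mixed."]

# A unified diff marks removed newsroom lines with "-" and the
# subtitler's replacements with "+", leaving unchanged lines alone.
for line in difflib.unified_diff(newsroom, wincaps,
                                 fromfile="newsroom", tofile="wincaps",
                                 lineterm=""):
    print(line)
```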
At the time of broadcast the subtitler just listens to the audio and cues each subtitle in time with the programme.
Video packages referenced by the newsroom system can also be used to prepare subtitles in advance of transmission – timecoded if there’s enough time.
Using newsroom scripts in this way helps ensure accuracy, virtually eliminates the otherwise inevitable delay of live subtitling and generally reduces the workload and stress on the subtitler.
For costs based on your requirements, please contact us via the contact form.