Subtitling

The process of subtitling comprises four stages:

– Spotting: Defining the in and out times of each subtitle so that it is synchronised with the audio and adheres to the minimum and maximum duration limits, taking any shot changes into consideration. At this stage, a raw template file (in .SRT, .SUB, .ASS or .VTT format; see the example cue after this list) is created using specialist software to serve as the source document for the translation phase.

– Translation: Translating the template content from the source language into English, localising and adapting it while remaining within the character limit for each title. The limit can vary slightly, but it is based on viewers reading 12 characters per second and an average two-line subtitle being displayed for around 6 seconds, equating to roughly 35 characters per line, or 70 characters per two-line subtitle (12 cps × 6 s ≈ 70). A simple programmatic check of these limits is sketched after this list. As we are translating the spoken word here, aspects such as register, semantic sets and semantic prosody must be given due consideration during the translation phase.

– Correction: Reviewing the sentence structure, comprehension and overall flow of the dialogue. The text produced must read naturally and follow the punctuation, syntax, spelling rules and conventions of the target language. Given the differences in structure between spoken and written language, the subtitles must also be split so that viewers can easily read and digest them (a naive line-splitting helper is sketched after this list). As with translation and copywriting, the end objective is for the consumer not to register the intervention of the subtitler, much less be distracted by their work. Some of the principal criteria are punctuation, line breaks, hyphens, ellipses and italics.

– Simulation: Once the spotting, translation and correction stages are complete, the AV content must be reviewed in a simulation session: a screening with the subtitles displayed on the video screen exactly as they will appear on the final product. Text modifications and timing adjustments can be made during the simulation. Strategies that are difficult to implement in the translation environment, such as speech-rate adaptation or the use of summarisation as opposed to offsetting, can help convey the pertinent information and subtexts of a shot or scene without losing the pace of the on-screen dialogue.
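
For reference, a raw .SRT template cue is plain text: a sequential index, an in/out timecode line and the subtitle text, separated by blank lines. The dialogue and timings below are invented purely for illustration:

    1
    00:00:01,500 --> 00:00:05,000
    You were supposed to be here
    an hour ago.

    2
    00:00:05,400 --> 00:00:07,400
    I know. I'm sorry.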
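
The reading-speed and line-length limits from the translation stage lend themselves to an automated first pass. The following is a minimal Python sketch, assuming the 12 cps, 35-characters-per-line and 6-second figures quoted above plus an illustrative 1-second minimum duration; it is not drawn from any particular subtitling tool:

    # Illustrative checks for the timing and length limits described
    # above. The thresholds are the figures quoted in the text; real
    # workflows tune them per client or platform.
    MAX_CPS = 12            # reading speed, characters per second
    MAX_CHARS_PER_LINE = 35
    MAX_LINES = 2
    MIN_DURATION = 1.0      # assumed minimum display time, seconds
    MAX_DURATION = 6.0      # ~6 s for a full two-line subtitle

    def check_subtitle(text: str, in_time: float, out_time: float) -> list[str]:
        """Return a list of rule violations for one subtitle cue."""
        problems = []
        duration = out_time - in_time
        lines = text.splitlines()
        if not MIN_DURATION <= duration <= MAX_DURATION:
            problems.append(f"duration {duration:.2f}s outside {MIN_DURATION}-{MAX_DURATION}s")
        if len(lines) > MAX_LINES:
            problems.append(f"{len(lines)} lines (max {MAX_LINES})")
        for line in lines:
            if len(line) > MAX_CHARS_PER_LINE:
                problems.append(f"line over {MAX_CHARS_PER_LINE} chars: {line!r}")
        # Newlines are counted as spaces here; conventions on counting
        # whitespace vary between style guides.
        cps = len(text.replace("\n", " ")) / duration
        if cps > MAX_CPS:
            problems.append(f"reading speed {cps:.1f} cps exceeds {MAX_CPS}")
        return problems

    print(check_subtitle("You were supposed to be here\nan hour ago.", 1.5, 5.0))

Run on the first cue of the .SRT fragment above (41 characters displayed for 3.5 seconds, roughly 11.7 cps), the check returns no violations.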
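
Line breaks can likewise be roughed in mechanically before the human pass at the correction stage. This balanced-split helper is again an illustrative sketch, not a production rule set: it breaks at the space nearest the midpoint, whereas real subtitling software also respects syntactic units, keeping articles and prepositions with the words they govern.

    def balanced_split(text: str) -> str:
        """Split one line into two lines of roughly equal length,
        breaking only at a space."""
        if len(text) <= 35:          # already fits on one line
            return text
        midpoint = len(text) // 2
        spaces = [i for i, ch in enumerate(text) if ch == " "]
        if not spaces:
            return text              # nothing to break on
        # Break at the space nearest the midpoint of the text.
        best = min(spaces, key=lambda i: abs(i - midpoint))
        return text[:best] + "\n" + text[best + 1:]

    print(balanced_split("You were supposed to be here an hour ago."))
    # -> "You were supposed to\nbe here an hour ago."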

What if I need to localise dialogue from the source language into a range of target languages?

Many streaming services and media distributors need to release localised versions of their media simultaneously with the source-language version. This is where the English Master Template (EMT) comes into play. Unlike a standard template, an EMT is created specifically for use as an interlingua: its use of language is more transparent for target-language linguists, highlighting nuances and subtleties that they might otherwise miss when translating from their second language into their first. Like a standard template, the EMT is time-coded, and this “proper” English subtitle file is treated as the source throughout the localisation process.
