The Changing Face of Broadcast Captioning
While often
overlooked, the role of broadcast television captioning continues
to change and expand. New workflows and delivery formats (including
3D) affect the production, transport and transcoding of these essential
components, and new accessibility regulations may increase broadcasters'
requirements in this area. A session at the upcoming NAB Broadcast
Engineering Conference (BEC, April 9 - 14, 2011, Las Vegas, Nev.
- see below for additional information), entitled "The Future
of Television Broadcasting," includes a paper, excerpted
here, which considers existing and emerging issues in captioning
for broadcast. "Captioning for Next Generation Broadcasting"
was authored by Sam Pemberton, Softel USA.
CHANGING
PRODUCTION AND BROADCAST WORKFLOWS - The subtitling component
needs to be closely integrated into the broadcaster's overall solution
and, ideally, considered during the initial design of a system.
With the goal of reaching the widest possible audience across multiple
platforms, which requires support for a multitude of output video formats,
broadcasters' focus is shifting away from the traditional production
systems and transmission chain, towards Digital Asset Management
Systems (DAMS). To aid format and resolution conversions for diverse
distribution formats, many broadcasters want to store video assets
as a single common "mezzanine" format: the highest-quality version
from which all broadcast and streaming versions are then derived.
To optimize repurposing, subtitle data should follow the same principle
and be stored in a high-level, generic form. With this approach
there are two key overarching methodologies
to consider. One relies on the creation of a "master" subtitle
carrying as much information as possible, from which less sophisticated
derivatives can be readily produced. In effect, this becomes the "mezzanine" format
subtitle. A "mezzanine" subtitle typically relies on informed
choices being made during the creation/preparation phase about presentational
aspects such as font, color, position, alignment,
drop shadow and character edging. Using a mezzanine format enables
the subtitle data to support the media asset over its lifetime,
allowing for elegant, effective and highly automated translation
to various output distribution formats.
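To make the "mezzanine subtitle" idea concrete, here is a minimal sketch, in Python, of what such a master record and one derivation step might look like. The class, field names and derivation function are illustrative assumptions, not Softel's format or any published standard.

from dataclasses import dataclass

@dataclass
class MezzanineSubtitle:
    """Hypothetical 'master' record carrying rich presentation data."""
    start_ms: int                    # display start (ms from program start)
    end_ms: int                      # display end
    text: str
    font: str = "ProportionalSans"
    color: str = "#FFFFFF"
    position: tuple = (0.5, 0.9)     # normalized (x, y) screen placement
    alignment: str = "center"
    drop_shadow: bool = True
    edging: str = "outline"          # character edging style

def derive_basic_caption(sub: MezzanineSubtitle) -> dict:
    """Derive a less sophisticated output form by keeping only the
    fields a minimal target format can carry; richer targets would
    retain more of the mezzanine attributes."""
    return {"start_ms": sub.start_ms, "end_ms": sub.end_ms, "text": sub.text}

Because the master record keeps every presentational attribute, each downstream format simply discards what it cannot carry, rather than forcing the richer formats down to a lowest common denominator.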
CONSIDERING
STEREOSCOPIC 3D - Caption and subtitle technology companies
are now creating and offering tools which allow for 2D subtitles
to be repurposed for 3D content. This involves using a video analysis
tool which builds a "3D object map" from the media assets.
This 3D metadata allows compatible subtitle creation systems
to automatically calculate an optimum placement for each subtitle
along the Z-axis. Naturally, the user should always be given the
opportunity to override any default positioning chosen by the system.
However, this automated step enables far more effective production
timelines - almost in line, in fact, with the creation and repurposing
of more traditional 2D subtitles. The importance of getting the depth
positioning correct should not be underestimated. Poorly placed subtitles
can distract and, in extreme cases, break the 3D experience. The goal
is for the subtitles to enhance the 3D experience and never detract from it.
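As a rough sketch of how such automated placement could work, the fragment below assumes the "3D object map" can be queried for the depth of the frontmost object under a subtitle's screen region at a given time; the query interface, depth convention (smaller values nearer the viewer), sampling interval and safety margin are all assumptions made for illustration.

def auto_z_placement(object_map, start_ms, end_ms, region,
                     margin=0.02, user_override=None):
    """Hypothetical depth placement: put the subtitle slightly in front
    of the frontmost scene object overlapping its region and time span.

    object_map(t_ms, region) is assumed to return the depth of the
    nearest object in that screen region at time t_ms, with smaller
    values closer to the viewer (an assumed convention)."""
    if user_override is not None:
        return user_override              # editorial choice always wins
    # Sample the object map across the subtitle's display interval.
    depths = [object_map(t, region) for t in range(start_ms, end_ms, 100)]
    frontmost = min(depths)               # closest object under the subtitle
    return frontmost - margin             # sit just in front of it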
"BINDING"
CONTENT FOR PRESENTATION - After the creation phase has been
completed, the subtitle data must then be "bound" to the
content, enabling presentation to the viewer when they watch the
programming. This binding can be considered as occurring in one
of three periods of time:
Early binding
- The pre-prepared file is linked to the program content well ahead
of transmission;
Late binding
- Similar to early binding, but occurs closer to air time and only
becomes possible due to faster-than-real-time encoding technologies;
Live binding
- For either truly live content or for pre-prepared content which
only becomes available very close to airing, thereby eliminating
the possibility of pre-binding subtitles.
In previous tape-based workflows, pre-prepared content was early-bound
by creating a sub-master tape with the subtitles encoded into the
VBI space on the tape via insertion into the baseband video. Although
this is still possible, the approach is largely being phased out
because it is labor-intensive and slow. In modern workflows, files
are either sent for time-of-air transmission (a live bind) or transcoded
into a file-based video asset (during early or late binding).
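As a simple illustration of how a playout workflow might choose among these three binding windows, the sketch below encodes the decision as a small policy function; the readiness flag and the two-hour late-binding window are invented parameters for illustration, not values from the paper.

from enum import Enum, auto

class Binding(Enum):
    EARLY = auto()   # subtitle file linked to content well ahead of air
    LATE = auto()    # bound close to air time via fast encoding
    LIVE = auto()    # subtitles inserted at time of air

def choose_binding(hours_until_air, subtitles_ready, late_window_hours=2.0):
    """Illustrative policy only; real playout systems apply richer rules."""
    if not subtitles_ready:
        return Binding.LIVE    # nothing to pre-bind; insert at air time
    if hours_until_air > late_window_hours:
        return Binding.EARLY   # ample time to transcode and QC the asset
    return Binding.LATE        # faster-than-real-time encode near air time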
Mr. Pemberton
will present this paper on Sunday, April 10, 2011 starting at
10:30 a.m. in room S219 of the Las Vegas Convention Center. The
paper will also be included in its entirety in the 2011 NAB Broadcast
Engineering Conference Proceedings, on sale at the 2011 NAB Show
Store, and available on-line from the NAB
Store after the convention. Other papers being presented during
this session (9:30 a.m. - 12:00 p.m.) include the following:
Non-Real-Time
Delivery of Broadcast Services, Rich Chernock, CTO, Triveni
Digital
Live Sports
Production of 22.2 Multi-Channel Sound for Super Hi-Vision TV,
Tsuyoshi Hinata, principal engineer, Japan Broadcasting Corporation
(NHK) Outside Broadcast Division (This format will also be demonstrated
during the Show at NHK's presentation theater in the International
Research Park, Booth #N233.)
DVB Second
Generation Standards: Commercial and Technical Drivers, Peter
Siebert, executive director, DVB Project Office
Current
Status and Future Prospects of Initiatives for Disaster Prevention
Information Dissemination in Data Broadcasting, Norio Sasaki,
Data Broadcasting Technologies and Applications, Japan Broadcasting
Corporation (NHK)
For additional
conference information, visit the NAB
Show website.